
Commit 3bb48b8 (1 parent: b0797b3)
chore: bump version number

4 files changed: +17 −37 lines

README.md (8 additions, 27 deletions)

--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@
 [![Build Status](https://msazure.visualstudio.com/Cognitive%20Services/_apis/build/status/Azure.mmlspark?branchName=master)](https://msazure.visualstudio.com/Cognitive%20Services/_build/latest?definitionId=83120&branchName=master) [![codecov](https://codecov.io/gh/Azure/mmlspark/branch/master/graph/badge.svg)](https://codecov.io/gh/Azure/mmlspark) [![Gitter](https://badges.gitter.im/Microsoft/MMLSpark.svg)](https://gitter.im/Microsoft/MMLSpark?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
-[![Release Notes](https://img.shields.io/badge/release-notes-blue)](https://github.com/Azure/mmlspark/releases) [![Release Notes](https://img.shields.io/badge/version-0.17-blue)](https://github.com/Azure/mmlspark/releases) [![version](https://mmlspark.blob.core.windows.net/icons/badges/master_version3.svg)](#sbt)
+[![Release Notes](https://img.shields.io/badge/release-notes-blue)](https://github.com/Azure/mmlspark/releases) [![Release Notes](https://img.shields.io/badge/version-0.18.0-blue)](https://github.com/Azure/mmlspark/releases) [![version](https://mmlspark.blob.core.windows.net/icons/badges/master_version3.svg)](#sbt)
 
 MMLSpark is an ecosystem of tools aimed towards expanding the distributed computing framework
@@ -129,9 +129,9 @@ MMLSpark can be conveniently installed on existing Spark clusters via the
 `--packages` option, examples:
 
 ```bash
-spark-shell --packages Azure:mmlspark:0.17
-pyspark --packages Azure:mmlspark:0.17
-spark-submit --packages Azure:mmlspark:0.17 MyApp.jar
+spark-shell --packages com.microsoft.ml.spark:mmlspark_2.11:0.18.0
+pyspark --packages com.microsoft.ml.spark:mmlspark_2.11:0.18.0
+spark-submit --packages com.microsoft.ml.spark:mmlspark_2.11:0.18.0 MyApp.jar
 ```
 
 This can be used in other Spark contexts too. For example, you can use MMLSpark
@@ -146,14 +146,14 @@ cloud](http://community.cloud.databricks.com), create a new [library from Maven
 coordinates](https://docs.databricks.com/user-guide/libraries.html#libraries-from-maven-pypi-or-spark-packages)
 in your workspace.
 
-For the coordinates use: `Azure:mmlspark:0.17`. Ensure this library is
+For the coordinates use: `com.microsoft.ml.spark:mmlspark_2.11:0.18.0`. Ensure this library is
 attached to all clusters you create.
 
 Finally, ensure that your Spark cluster has at least Spark 2.1 and Scala 2.11.
 
 You can use MMLSpark in both your Scala and PySpark notebooks. To get started with our example notebooks import the following databricks archive:
 
-`https://mmlspark.blob.core.windows.net/dbcs/MMLSpark%20Examples%20v0.17.dbc`
+`https://mmlspark.blob.core.windows.net/dbcs/MMLSpark%20Examples%20v0.18.0.dbc`
 
 ### Docker
 
@@ -185,39 +185,20 @@ the above example, or from python:
 ```python
 import pyspark
 spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
-            .config("spark.jars.packages", "Azure:mmlspark:0.17") \
+            .config("spark.jars.packages", "com.microsoft.ml.spark:mmlspark_2.11:0.18.0") \
             .getOrCreate()
 import mmlspark
 ```
 
 <img title="Script action submission" src="http://i.imgur.com/oQcS0R2.png" align="right" />
 
-### HDInsight
-
-To install MMLSpark on an existing [HDInsight Spark
-Cluster](https://docs.microsoft.com/en-us/azure/hdinsight/), you can execute a
-script action on the cluster head and worker nodes. For instructions on
-running script actions, see [this
-guide](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-customize-cluster-linux#use-a-script-action-during-cluster-creation).
-
-The script action url is:
-<https://mmlspark.azureedge.net/buildartifacts/0.17/install-mmlspark.sh>.
-
-If you're using the Azure Portal to run the script action, go to `Script
-actions` → `Submit new` in the `Overview` section of your cluster blade. In
-the `Bash script URI` field, input the script action URL provided above. Mark
-the rest of the options as shown on the screenshot to the right.
-
-Submit, and the cluster should finish configuring within 10 minutes or so.
-
 ### SBT
 
 If you are building a Spark application in Scala, add the following lines to
 your `build.sbt`:
 
 ```scala
-resolvers += "MMLSpark Repo" at "https://mmlspark.azureedge.net/maven"
-libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.17"
+libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.18.0"
 ```
 
 ### Building from source
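The README change above replaces the old spark-packages coordinate (`Azure:mmlspark:0.17`) with a full Maven coordinate whose artifact name carries the Scala binary version (`mmlspark_2.11`). In sbt this suffix is what the `%%` operator adds automatically; the following `build.sbt` sketch (the `scalaVersion` value is an illustrative assumption, not taken from this commit) shows why the SBT line and the `--packages` coordinate refer to the same artifact:

```scala
// build.sbt — illustrative sketch; only the mmlspark coordinate
// comes from this commit, the scalaVersion shown is an assumption.
scalaVersion := "2.11.12"

// `%%` appends the Scala binary version ("_2.11") to the artifact name,
// so this resolves to com.microsoft.ml.spark:mmlspark_2.11:0.18.0 —
// the same coordinate passed to spark-shell/pyspark via --packages.
libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.18.0"
```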

docs/R-setup.md (3 additions, 3 deletions)

--- a/docs/R-setup.md
+++ b/docs/R-setup.md
@@ -10,7 +10,7 @@ To install the current MMLSpark package for R use:
 
 ```R
 ...
-devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.17.zip")
+devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.18.0.zip")
 ...
 ```
 
@@ -23,7 +23,7 @@ It will take some time to install all dependencies. Then, run:
 library(sparklyr)
 library(dplyr)
 config <- spark_config()
-config$sparklyr.defaultPackages <- "Azure:mmlspark:0.17"
+config$sparklyr.defaultPackages <- "com.microsoft.ml.spark:mmlspark_2.11:0.18.0"
 sc <- spark_connect(master = "local", config = config)
 ...
 ```
@@ -83,7 +83,7 @@ and then use spark_connect with method = "databricks":
 
 ```R
 install.packages("devtools")
-devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.17.zip")
+devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.18.0.zip")
 library(sparklyr)
 library(dplyr)
 sc <- spark_connect(method = "databricks")

docs/docker.md (4 additions, 6 deletions)

--- a/docs/docker.md
+++ b/docs/docker.md
@@ -28,7 +28,7 @@ You can now select one of the sample notebooks and run it, or create your own.
 In the above, `mcr.microsoft.com/mmlspark/release` specifies the project and image name that you
 want to run. There is another component implicit here which is the _tag_ (=
 version) that you want to use — specifying it explicitly looks like
-`mcr.microsoft.com/mmlspark/release:0.17` for the `0.17` tag.
+`mcr.microsoft.com/mmlspark/release:0.18.0` for the `0.18.0` tag.
 
 Leaving `mcr.microsoft.com/mmlspark/release` by itself has an implicit `latest` tag, so it is
 equivalent to `mcr.microsoft.com/mmlspark/release:latest`. The `latest` tag is identical to the
@@ -42,21 +42,19 @@ that you will probably want to use can look as follows:
 
 ```bash
 docker run -it --rm \
-           -e ACCEPT_EULA=y \
            -p 127.0.0.1:80:8888 \
            -v ~/myfiles:/notebooks/myfiles \
-           mcr.microsoft.com/mmlspark/release:0.17
+           mcr.microsoft.com/mmlspark/release:0.18.0
 ```
 
 In this example, backslashes are used to break things up for readability; you
 can enter it as one long like. Note that in powershell, the `myfiles` local
 path and line breaks looks a little different:
 
     docker run -it --rm `
-               -e ACCEPT_EULA=y `
                -p 127.0.0.1:80:8888 `
                -v C:\myfiles:/notebooks/myfiles `
-               mcr.microsoft.com/mmlspark/release:0.17
+               mcr.microsoft.com/mmlspark/release:0.18.0
 
 Let's break this command and go over the meaning of each part:
 
@@ -139,7 +137,7 @@ Let's break this command and go over the meaning of each part:
     model.write().overwrite().save('myfiles/myTrainedModel.mml')
 ```
 
-- **`mcr.microsoft.com/mmlspark/release:0.17`**
+- **`mcr.microsoft.com/mmlspark/release:0.18.0`**
 
   Finally, this specifies an explicit version tag for the image that we want to
   run.

src/main/scala/com/microsoft/ml/spark/cognitive/CognitiveServiceBase.scala (2 additions, 1 deletion)

--- a/src/main/scala/com/microsoft/ml/spark/cognitive/CognitiveServiceBase.scala
+++ b/src/main/scala/com/microsoft/ml/spark/cognitive/CognitiveServiceBase.scala
@@ -5,6 +5,7 @@ package com.microsoft.ml.spark.cognitive
 
 import java.net.URI
 
+import com.microsoft.ml.spark.build.BuildInfo
 import com.microsoft.ml.spark.core.contracts.HasOutputCol
 import com.microsoft.ml.spark.core.schema.DatasetExtensions
 import com.microsoft.ml.spark.io.http._
@@ -179,7 +180,7 @@ object URLEncodingUtils {
 object CognitiveServiceUtils {
 
   def setUA(req: HttpRequestBase): Unit = {
-    req.setHeader("User-Agent", "mmlspark/0.17")
+    req.setHeader("User-Agent", s"mmlspark/${BuildInfo.version}")
   }
 }