[SPARK-1390] Refactoring of matrices backed by RDDs #296

Closed
wants to merge 14 commits
This file was deleted.

This file was deleted.

@@ -0,0 +1,64 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.examples.mllib

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.linalg.Vectors

/**
* Compute the principal components of a tall-and-skinny matrix, whose rows are observations.
*
* The input matrix must be stored in row-oriented dense format, one line per row with its entries
* separated by space. For example,
* {{{
* 0.5 1.0
* 2.0 3.0
* 4.0 5.0
* }}}
* represents a 3-by-2 matrix, whose first row is (0.5, 1.0).
*/
object TallSkinnyPCA {
def main(args: Array[String]) {
if (args.length != 2) {
System.err.println("Usage: TallSkinnyPCA <master> <file>")
System.exit(1)
}

val conf = new SparkConf()
.setMaster(args(0))
.setAppName("TallSkinnyPCA")
.setSparkHome(System.getenv("SPARK_HOME"))
.setJars(SparkContext.jarOfClass(this.getClass))
val sc = new SparkContext(conf)

// Load and parse the data file.
val rows = sc.textFile(args(1)).map { line =>
val values = line.split(' ').map(_.toDouble)
Vectors.dense(values)
}
val mat = new RowMatrix(rows)

// Compute principal components.
val pc = mat.computePrincipalComponents(mat.numCols().toInt)

println("Principal components are:\n" + pc)
Contributor:
This will likely only print a reference - maybe just print out diagnostic information like "pc.numCols() principal vectors computed"

Contributor (author):
Implemented toString for Matrix.


sc.stop()
}
}
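The parsing step above (split each line on spaces, convert the pieces to doubles) can be sketched without a Spark cluster. The following is a plain-Scala stand-in for the function passed to `sc.textFile(...).map`; the object name `ParseRows` is illustrative, not part of the PR:

```scala
// Plain-Scala stand-in for the parsing step in TallSkinnyPCA: each input
// line holds one matrix row, with entries separated by a single space.
object ParseRows {
  def parse(line: String): Array[Double] =
    line.split(' ').map(_.toDouble)
}
```

Applied to the three lines from the doc comment, this yields the rows (0.5, 1.0), (2.0, 3.0), (4.0, 5.0); in the example itself the same body runs distributed, inside the `map` over `sc.textFile`.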
@@ -0,0 +1,64 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.examples.mllib

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.linalg.Vectors

/**
* Compute the singular value decomposition (SVD) of a tall-and-skinny matrix.
*
* The input matrix must be stored in row-oriented dense format, one line per row with its entries
* separated by space. For example,
* {{{
* 0.5 1.0
* 2.0 3.0
* 4.0 5.0
* }}}
* represents a 3-by-2 matrix, whose first row is (0.5, 1.0).
*/
object TallSkinnySVD {
def main(args: Array[String]) {
if (args.length != 2) {
System.err.println("Usage: TallSkinnySVD <master> <file>")
System.exit(1)
}

val conf = new SparkConf()
.setMaster(args(0))
.setAppName("TallSkinnySVD")
.setSparkHome(System.getenv("SPARK_HOME"))
.setJars(SparkContext.jarOfClass(this.getClass))
val sc = new SparkContext(conf)

// Load and parse the data file.
val rows = sc.textFile(args(1)).map { line =>
val values = line.split(' ').map(_.toDouble)
Vectors.dense(values)
}
val mat = new RowMatrix(rows)

// Compute SVD.
val svd = mat.computeSVD(mat.numCols().toInt)

println("Singular values are " + svd.s)
Contributor:
This will just print the reference to the array? Maybe use mkString

Contributor (author):
Vector has a toString implementation.


sc.stop()
}
}
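For intuition about what `computeSVD` returns: the singular values of a tall-and-skinny matrix A are the square roots of the eigenvalues of the small Gram matrix AᵀA. The sketch below computes them for a 2-column matrix using the closed-form eigenvalues of a symmetric 2x2 matrix. It is a hand-rolled illustration under that assumption, not how `RowMatrix.computeSVD` is actually implemented, and the name `TinySVD` is hypothetical:

```scala
// Singular values of a tall-and-skinny 2-column matrix A, via the 2x2
// Gram matrix A^T A. For a symmetric 2x2 matrix [[a, b], [b, c]] the
// eigenvalues are (a + c)/2 +- sqrt(((a - c)/2)^2 + b^2); the singular
// values of A are their square roots.
object TinySVD {
  def singularValues(rows: Seq[Array[Double]]): (Double, Double) = {
    val a = rows.map(r => r(0) * r(0)).sum // sum of x^2
    val b = rows.map(r => r(0) * r(1)).sum // sum of x*y
    val c = rows.map(r => r(1) * r(1)).sum // sum of y^2
    val mean = (a + c) / 2
    val disc = math.sqrt(((a - c) / 2) * ((a - c) / 2) + b * b)
    (math.sqrt(mean + disc), math.sqrt(mean - disc))
  }
}
```

On the 3-by-2 matrix from the doc comment, the two squared singular values sum to the squared Frobenius norm of the matrix (55.25), a useful sanity check.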
101 changes: 101 additions & 0 deletions mllib/src/main/scala/org/apache/spark/mllib/linalg/Matrices.scala
@@ -0,0 +1,101 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.mllib.linalg

import breeze.linalg.{Matrix => BM, DenseMatrix => BDM}

/**
* Trait for a local matrix.
*/
trait Matrix extends Serializable {

/** Number of rows. */
def numRows: Int

/** Number of columns. */
def numCols: Int

/** Converts to a dense array in column major. */
def toArray: Array[Double]

/** Converts to a breeze matrix. */
Contributor:
To leave breadcrumbs: you decided that it is better to use Breeze library instead of DoubleMatrix since it has Sparse vector support. http://www.scalanlp.org/api/breeze/index.html#breeze.linalg.package

private[mllib] def toBreeze: BM[Double]

/** Gets the (i, j)-th element. */
private[mllib] def apply(i: Int, j: Int): Double = toBreeze(i, j)

override def toString: String = toBreeze.toString()
}

/**
* Column-major dense matrix.
Contributor:
Do you define what column-major means somewhere? It is good you use the terminology consistently throughout, but I can't find the definition in our documentation. Some users might not be aware what it means.

Contributor (author):
Added the definition of column major and an example.

* The entry values are stored in a single array of doubles with columns listed in sequence.
* For example, the following matrix
* {{{
* 1.0 2.0
* 3.0 4.0
* 5.0 6.0
* }}}
* is stored as `[1.0, 3.0, 5.0, 2.0, 4.0, 6.0]`.
*
* @param numRows number of rows
* @param numCols number of columns
* @param values matrix entries in column major
*/
class DenseMatrix(val numRows: Int, val numCols: Int, val values: Array[Double]) extends Matrix {

require(values.length == numRows * numCols)

override def toArray: Array[Double] = values

private[mllib] override def toBreeze: BM[Double] = new BDM[Double](numRows, numCols, values)
}
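The column-major layout documented above maps entry (i, j) of an m-by-n matrix to flat index i + j * m. A minimal stand-alone sketch of that index arithmetic (the `ColumnMajor` object is illustrative and has no MLlib dependency):

```scala
// Column-major storage: entry (i, j) of a numRows-by-numCols matrix
// lives at flat index i + j * numRows in the values array.
object ColumnMajor {
  def apply(values: Array[Double], numRows: Int, i: Int, j: Int): Double =
    values(i + j * numRows)
}
```

With the doc comment's example array `[1.0, 3.0, 5.0, 2.0, 4.0, 6.0]` for a 3-by-2 matrix, entry (0, 1) is at index 0 + 1 * 3 = 3, i.e. 2.0, matching the second column's first entry.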

/**
* Factory methods for [[org.apache.spark.mllib.linalg.Matrix]].
*/
object Matrices {

/**
* Creates a column-major dense matrix.
*
* @param numRows number of rows
* @param numCols number of columns
* @param values matrix entries in column major
*/
def dense(numRows: Int, numCols: Int, values: Array[Double]): Matrix = {
new DenseMatrix(numRows, numCols, values)
}

/**
* Creates a Matrix instance from a breeze matrix.
* @param breeze a breeze matrix
* @return a Matrix instance
*/
private[mllib] def fromBreeze(breeze: BM[Double]): Matrix = {
breeze match {
case dm: BDM[Double] =>
require(dm.majorStride == dm.rows,
"Stride sizes other than the number of rows are not supported.")
new DenseMatrix(dm.rows, dm.cols, dm.data)
case _ =>
throw new UnsupportedOperationException(
s"Conversion from type ${breeze.getClass.getName} is not supported.")
}
}
}
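The `majorStride` check in `fromBreeze` exists because a column-major matrix can be a view into a larger backing array, where column j starts at `offset + j * stride` rather than `j * numRows`; only when the stride equals the row count is the data contiguous and safe to wrap directly. A plain-Scala sketch of stride-based indexing, with illustrative names rather than Breeze's actual internals:

```scala
// A column-major view into a backing array. When stride == numRows the
// data is contiguous (the case fromBreeze accepts); when stride > numRows
// the view skips over padding between columns, so a copy would be needed
// before wrapping the raw array in a DenseMatrix.
object StridedView {
  def entry(data: Array[Double], offset: Int, stride: Int,
            i: Int, j: Int): Double =
    data(offset + i + j * stride)
}
```

For example, in a column-major 4-by-3 backing array, the 2-by-2 sub-block starting at row 1, column 1 is a view with offset 1 + 1 * 4 = 5 and stride 4, even though the sub-block itself has only 2 rows.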
29 changes: 0 additions & 29 deletions mllib/src/main/scala/org/apache/spark/mllib/linalg/MatrixSVD.scala

This file was deleted.
