[SEDONA-655] DBSCAN #1589

Merged
merged 12 commits into from
Oct 8, 2024
23 changes: 23 additions & 0 deletions docs/api/stats/sql.md
Member

This file is not listed in mkdocs.yml, so it will not show up in the website navigation bar.

Contributor Author

On it, and fixing test failures.

@@ -0,0 +1,23 @@
## Overview
The stats module of Sedona provides Scala and Python functions that can be called on dataframes with spatial columns to perform geospatial statistical analysis. It is built on top of the core module and is designed to be used alongside the core and viz modules to provide a complete set of geospatial analysis tools.

## Using DBSCAN
The DBSCAN function is provided at `org.apache.sedona.stats.clustering.DBSCAN.dbscan` in Scala/Java and `sedona.stats.clustering.dbscan.dbscan` in Python.

The function annotates a dataframe with a cluster label for each data record using the DBSCAN algorithm.
The dataframe should contain at least one GeometryType column. Rows must be unique. If one
geometry column is present it will be used automatically. If two are present, the one named
'geometry' will be used. If more than one are present and none are named 'geometry', the
column name must be provided. The new column will be named 'cluster'.

### Parameters
Names in parentheses are the Python argument names.
- dataframe - dataframe to cluster. Must contain at least one GeometryType column
- epsilon - minimum distance parameter of DBSCAN algorithm
- minPts (min_pts) - minimum number of points parameter of DBSCAN algorithm
- geometry - name of the geometry column
- includeOutliers (include_outliers) - whether to include outliers in the output. Default is true
- useSpheroid (use_spheroid) - whether to use a spheroidal (true) rather than Cartesian (false) distance calculation. Default is false


The output is the input DataFrame with a cluster label added to each row. Outliers will have a cluster value of -1 if included.
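To make the labeling conventions above concrete, here is a minimal single-machine sketch of DBSCAN in plain Python. It is illustrative only: `naive_dbscan` is a hypothetical name, the implementation is the textbook sequential algorithm rather than Sedona's distributed one, and it assumes `minPts` counts the point itself, as is conventional.

```python
# Illustrative only: a naive in-memory DBSCAN with the labeling
# conventions described above (cluster ids from 0, outliers get -1).
# This is NOT Sedona API; Sedona's implementation is distributed.
from math import dist


def naive_dbscan(points, epsilon, min_pts):
    """Return one cluster label per point; -1 marks outliers (noise)."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        # Neighborhood includes the point itself (distance 0 <= epsilon).
        neighbors = [j for j in range(len(points))
                     if dist(points[i], points[j]) <= epsilon]
        if len(neighbors) < min_pts:
            labels[i] = -1  # provisional noise; a later cluster may claim it
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neighbors if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = [k for k in range(len(points))
                           if dist(points[j], points[k]) <= epsilon]
            if len(j_neighbors) >= min_pts:
                queue.extend(j_neighbors)  # j is a core point, keep expanding
    return labels
```

For example, three mutually close points and one far point, with `min_pts=3`, yield one cluster plus one outlier labeled -1, mirroring the `cluster` column described above.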
52 changes: 52 additions & 0 deletions docs/tutorial/sql.md
@@ -739,6 +739,58 @@ The coordinates of polygons have been changed. The output will be like this:

```

## Cluster with DBSCAN
Sedona provides an implementation of the [DBSCAN](https://en.wikipedia.org/wiki/Dbscan) algorithm to cluster spatial data.

The algorithm is available as a Scala and Python function called on a spatial dataframe. The returned dataframe has an additional column containing the unique identifier of the cluster each record belongs to, and a boolean column indicating whether the record is a core point.

The first parameter is the dataframe; the next two are the epsilon and minPts (min_pts in Python) parameters of the DBSCAN algorithm.

=== "Scala"

```scala
import org.apache.sedona.stats.clustering.DBSCAN.dbscan

dbscan(df, 0.1, 5).show()
```

=== "Java"

```java
import org.apache.sedona.stats.clustering.DBSCAN;

DBSCAN.dbscan(df, 0.1, 5).show();
```

=== "Python"

```python
from sedona.stats.clustering.dbscan import dbscan

dbscan(df, 0.1, 5).show()
```

The output will look like this:
```
+----------------+---+------+-------+
| geometry| id|isCore|cluster|
+----------------+---+------+-------+
| POINT (2.5 4)| 3| false| 1|
| POINT (3 4)| 2| false| 1|
| POINT (3 5)| 5| false| 1|
| POINT (1 3)| 9| true| 0|
| POINT (2.5 4.5)| 7| true| 1|
| POINT (1 2)| 1| true| 0|
| POINT (1.5 2.5)| 4| true| 0|
| POINT (1.2 2.5)| 8| true| 0|
| POINT (1 2.5)| 11| true| 0|
| POINT (1 5)| 10| false| -1|
| POINT (5 6)| 12| false| -1|
|POINT (12.8 4.5)| 6| false| -1|
| POINT (4 3)| 13| false| -1|
+----------------+---+------+-------+
```

## Run spatial queries

After creating a Geometry type column, you are able to run spatial queries.
13 changes: 13 additions & 0 deletions pom.xml
@@ -85,6 +85,7 @@
<spark.version>3.3.0</spark.version>
<spark.compat.version>3.3</spark.compat.version>
<log4j.version>2.17.2</log4j.version>
<graphframe.version>0.8.3-spark3.4</graphframe.version>

<flink.version>1.19.0</flink.version>
<slf4j.version>1.7.36</slf4j.version>
@@ -394,6 +395,10 @@
<enabled>true</enabled>
</releases>
</repository>
<repository>
<id>Spark Packages</id>
<url>https://repos.spark-packages.org/</url>
</repository>
</repositories>
<build>
<pluginManagement>
@@ -578,6 +583,8 @@
<scala.compat.version>${scala.compat.version}</scala.compat.version>
<spark.version>${spark.version}</spark.version>
<scala.version>${scala.version}</scala.version>
<log4j.version>${log4j.version}</log4j.version>
<graphframe.version>${graphframe.version}</graphframe.version>
</properties>
</configuration>
<executions>
@@ -686,6 +693,7 @@
<spark.version>3.0.3</spark.version>
<spark.compat.version>3.0</spark.compat.version>
<log4j.version>2.17.2</log4j.version>
<graphframe.version>0.8.1-spark3.0</graphframe.version>
<!-- Skip deploying parent module. it will be deployed with sedona-spark-3.3 -->
<skip.deploy.common.modules>true</skip.deploy.common.modules>
</properties>
@@ -703,6 +711,7 @@
<spark.version>3.1.2</spark.version>
<spark.compat.version>3.1</spark.compat.version>
<log4j.version>2.17.2</log4j.version>
<graphframe.version>0.8.2-spark3.1</graphframe.version>
<!-- Skip deploying parent module. it will be deployed with sedona-spark-3.3 -->
<skip.deploy.common.modules>true</skip.deploy.common.modules>
</properties>
@@ -720,6 +729,7 @@
<spark.version>3.2.0</spark.version>
<spark.compat.version>3.2</spark.compat.version>
<log4j.version>2.17.2</log4j.version>
<graphframe.version>0.8.2-spark3.2</graphframe.version>
<!-- Skip deploying parent module. it will be deployed with sedona-spark-3.3 -->
<skip.deploy.common.modules>true</skip.deploy.common.modules>
</properties>
@@ -738,6 +748,7 @@
<spark.version>3.3.0</spark.version>
<spark.compat.version>3.3</spark.compat.version>
<log4j.version>2.17.2</log4j.version>
<graphframe.version>0.8.3-spark3.4</graphframe.version>
</properties>
</profile>
<profile>
@@ -752,6 +763,7 @@
<spark.version>3.4.0</spark.version>
<spark.compat.version>3.4</spark.compat.version>
<log4j.version>2.19.0</log4j.version>
<graphframe.version>0.8.3-spark3.4</graphframe.version>
<!-- Skip deploying parent module. it will be deployed with sedona-spark-3.3 -->
<skip.deploy.common.modules>true</skip.deploy.common.modules>
</properties>
@@ -768,6 +780,7 @@
<spark.version>3.5.0</spark.version>
<spark.compat.version>3.5</spark.compat.version>
<log4j.version>2.20.0</log4j.version>
<graphframe.version>0.8.3-spark3.5</graphframe.version>
<!-- Skip deploying parent module. it will be deployed with sedona-spark-3.3 -->
<skip.deploy.common.modules>true</skip.deploy.common.modules>
</properties>
2 changes: 2 additions & 0 deletions python/Pipfile
@@ -10,6 +10,8 @@ jupyter="*"
mkdocs="*"
pytest-cov = "*"

scikit-learn = "*"

[packages]
pandas="<=1.5.3"
numpy="<2"
16 changes: 16 additions & 0 deletions python/sedona/stats/__init__.py
@@ -0,0 +1,16 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
21 changes: 21 additions & 0 deletions python/sedona/stats/clustering/__init__.py
@@ -0,0 +1,21 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

"""The clustering module contains Spark-based implementations of popular geospatial clustering algorithms.

These implementations are designed to scale to large datasets and support various geometric feature types.
"""
68 changes: 68 additions & 0 deletions python/sedona/stats/clustering/dbscan.py
@@ -0,0 +1,68 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

"""DBSCAN is a popular clustering algorithm for spatial data.

It identifies groups of records that are close enough to each other. This implementation leverages Spark,
Sedona and GraphFrames to support large-scale datasets and heterogeneous geometric feature types.
"""
from typing import Optional

from pyspark.sql import DataFrame, SparkSession

ID_COLUMN_NAME = "__id"
DEFAULT_MAX_SAMPLE_SIZE = 1000000 # 1 million


def dbscan(
dataframe: DataFrame,
epsilon: float,
min_pts: int,
geometry: Optional[str] = None,
include_outliers: bool = True,
use_spheroid: bool = False,
) -> DataFrame:
"""Annotates a dataframe with a cluster label for each data record using the DBSCAN algorithm.

The dataframe should contain at least one GeometryType column. Rows must be unique. If one geometry column is
present it will be used automatically. If two are present, the one named 'geometry' will be used. If more than one
are present and neither is named 'geometry', the column name must be provided.

Args:
dataframe: spark dataframe containing the geometries
epsilon: minimum distance parameter of DBSCAN algorithm
min_pts: minimum number of points parameter of DBSCAN algorithm
geometry: name of the geometry column
include_outliers: whether to return outlier points. If True, outliers are returned with a cluster value of -1.
    Default is True
use_spheroid: whether to use a spheroidal (True) rather than Cartesian (False) distance calculation. Default is False

Returns:
A PySpark DataFrame containing the cluster label for each row
"""
sedona = SparkSession.getActiveSession()

result_df = sedona._jvm.org.apache.sedona.stats.clustering.DBSCAN.dbscan(
dataframe._jdf,
float(epsilon),
min_pts,
geometry,
include_outliers,
use_spheroid,
)

return DataFrame(result_df, sedona)
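The module docstring above mentions GraphFrames. A common way to express DBSCAN as a graph problem, which distributes well, is: mark points with at least `min_pts` neighbors within `epsilon` as cores, connect cores that are within `epsilon` of each other, and take connected components of that core graph as clusters; border points attach to a neighboring core's cluster. Below is a hedged single-machine sketch of that formulation using union-find; `graph_dbscan` is a hypothetical name and none of this is Sedona's actual code.

```python
# Single-machine sketch of the graph formulation of DBSCAN
# (Sedona distributes the same idea over Spark/GraphFrames).
from math import dist


def graph_dbscan(points, epsilon, min_pts):
    """Return one cluster label per point; -1 marks noise."""
    n = len(points)
    # Neighborhoods include the point itself, so min_pts counts it too.
    neighbors = [
        [j for j in range(n) if dist(points[i], points[j]) <= epsilon]
        for i in range(n)
    ]
    core = [len(neighbors[i]) >= min_pts for i in range(n)]

    # Union-find: core points within epsilon belong to one component.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        if core[i]:
            for j in neighbors[i]:
                if core[j]:
                    parent[find(i)] = find(j)

    # Dense labels: each core component is a cluster; border points join
    # a neighboring core's cluster; everything else is noise (-1).
    roots = sorted({find(i) for i in range(n) if core[i]})
    label_of_root = {r: k for k, r in enumerate(roots)}
    labels = []
    for i in range(n):
        if core[i]:
            labels.append(label_of_root[find(i)])
        else:
            core_nb = next((j for j in neighbors[i] if core[j]), None)
            labels.append(label_of_root[find(core_nb)] if core_nb is not None else -1)
    return labels
```

The connected-components step is exactly what GraphFrames provides at scale; the union-find here is its in-memory stand-in.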
36 changes: 36 additions & 0 deletions python/sedona/stats/utils/__init__.py
@@ -0,0 +1,36 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

from pyspark.sql import DataFrame
from sedona.sql.types import GeometryType


def get_geometry_column_name(df: DataFrame) -> str:
geom_fields = [
field.name for field in df.schema.fields if field.dataType == GeometryType()
]

if len(geom_fields) == 0:
raise ValueError("No GeometryType column found. Provide a dataframe containing a geometry column.")

if len(geom_fields) == 1:
return geom_fields[0]

if len(geom_fields) > 1 and "geometry" not in geom_fields:
raise ValueError("Multiple GeometryType columns found. Provide the column name as an argument.")

return "geometry"
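The resolution rules implemented above can be restated in plain Python without a Spark schema, which makes them easy to check; `resolve_geometry_column` is a hypothetical helper that takes `(name, is_geometry)` pairs instead of `df.schema.fields`.

```python
# Plain-Python restatement of the geometry-column resolution rules,
# with (name, is_geometry) pairs standing in for a Spark schema.
def resolve_geometry_column(fields):
    """Pick the geometry column name, mirroring get_geometry_column_name."""
    geom = [name for name, is_geometry in fields if is_geometry]
    if not geom:
        raise ValueError("No GeometryType column found. "
                         "Provide a dataframe containing a geometry column.")
    if len(geom) == 1:
        return geom[0]  # exactly one geometry column: use it
    if "geometry" not in geom:
        raise ValueError("Multiple GeometryType columns found. "
                         "Provide the column name as an argument.")
    return "geometry"  # several candidates: prefer the one named 'geometry'
```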