Merge branch 'master' into SNAP-3138
sonalsagarwal committed Mar 3, 2020
2 parents 402beff + c651b6b commit 5246c37
Showing 50 changed files with 2,326 additions and 534 deletions.
@@ -1688,7 +1688,7 @@ class PrimaryDUnitRecoveryTest(s: String) extends DistributedTestBase(s)
stmt.execute(s"ALTER TABLE $fqtn DROP COLUMN c2")
stmt.execute(s"DELETE FROM $fqtn WHERE c1 = 2")
stmt.execute(s"DELETE FROM $fqtn WHERE c1 = 5")
stmt.execute(s"ALTER TABLE $fqtn ADD COLUMN c2 integer")
stmt.execute(s"ALTER TABLE $fqtn ADD COLUMN c4 integer")
stmt.execute(s"INSERT INTO $fqtn VALUES (9, 99, 999)")

// 10: null and not null complex types 2 buckets no alter
@@ -168,9 +168,9 @@ class OpLogRdd(
}
}
}
- assert(index != -1, s"column id not found for $fqtn.$colName")
- tableColIdsMap.getOrElse(s"$version#$fqtnLowerKey",
+ if (index != -1) tableColIdsMap.getOrElse(s"$version#$fqtnLowerKey",
    throw new IllegalStateException(s"column ids not found: $version#$fqtnLowerKey"))(index)
+ else -1
}

/**
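The change above replaces a hard assertion with a -1 sentinel, so a column that cannot be resolved (for example, one dropped by ALTER TABLE) no longer aborts recovery. A minimal sketch of the resulting lookup shape, with names and the enclosing signature assumed rather than taken from the project:

```pre
object ColumnIdLookup {
  // Sketch only: resolve a column id for a recovered table, returning -1
  // when the column index was not found, instead of failing an assertion.
  def lookupColumnId(tableColIdsMap: Map[String, Array[Int]],
      version: Int, fqtnLowerKey: String, index: Int): Int = {
    if (index != -1) {
      // A missing schema-version entry remains a hard error.
      val colIds = tableColIdsMap.getOrElse(s"$version#$fqtnLowerKey",
        throw new IllegalStateException(s"column ids not found: $version#$fqtnLowerKey"))
      colIds(index)
    } else -1
  }
}
```

Callers can then treat -1 as "column not present in this schema version" rather than crashing mid-recovery.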
Binary file added docs/Images/snappy-scala_api.png
Binary file added docs/Images/snappy-scala_api_1.png
24 changes: 8 additions & 16 deletions docs/howto/connect_using_odbc_driver.md
@@ -19,36 +19,28 @@ To download and install the Visual C++ Redistributable for Visual Studio 2013:

To download and install the ODBC driver:

- 1. [Download the TIBCO ComputeDB 1.2.0 Enterprise Version](https://edelivery.tibco.com/storefront/index.ep). The downloaded file contains the TIBCO ComputeDB ODBC driver installers.
-
- 2. Depending on your Windows installation, extract the contents of the 32-bit or 64-bit version of the TIBCO ComputeDB ODBC Driver.
+ 1. Download the drivers zip file **TIB_compute_drivers_1.2.0_linux.zip** using the steps provided [here](/quickstart/getting_started_by_installing_snappydata_on-premise.md). After this file is extracted, you will find that it contains the ODBC installers in another file **TIB_compute-odbc_1.2.0_win.zip**.
+ 2. Extract **TIB_compute-odbc_1.2.0_win.zip**. Depending on your Windows installation, extract the contents of the 32-bit or 64-bit version of the TIBCO ComputeDB ODBC Driver.

| Version | ODBC Driver |
|--------|--------|
|32-bit for 32-bit platform|TIB_compute-odbc_1.2.0_win_x86.zip|
|64-bit for 64-bit platform|TIB_compute-odbc_1.2.0_win_x64.zip|

- 4. Double-click on the extracted **TIB_compute-odbc_1.2.0_win.msi** file, and follow the steps to complete the installation.
+ 4. Double-click on the corresponding **msi** file, and follow the steps to complete the installation.

!!! Note
Ensure that [TIBCO ComputeDB is installed](../install.md) and the [TIBCO ComputeDB cluster is running](start_snappy_cluster.md).

## Connecting to the TIBCO ComputeDB Cluster
Once you have installed the TIBCO ComputeDB ODBC Driver, you can connect to TIBCO ComputeDB cluster in any of the following ways:

- * Use the TIBCO ComputeDB Driver Connection URL:
-
-       Driver=TIBCO ComputeDB ODBC Driver;server=<locator address>;port=<LocatorPort>;user=<userName>;password=<password>;load-balance=true
-
-   In case you want to connect with a specific server:
-
-       Driver=TIBCO ComputeDB ODBC Driver;server=<ServerHost>;port=<ServerPort>;user=<userName>;password=<password>;load-balance=false
-
- !!!Note
-     On the AWS instance, there are issues when you connect with the locator port and address. Therefore,on the AWS instance, it is necessary to provide the load-balance=false property, while connecting to the server.
+ * Use the TIBCO ComputeDB Driver Connection URL:

- * Create a TIBCO ComputeDB DSN (Data Source Name) using the installed TIBCO ComputeDB ODBC Driver. Refer to the Windows documentation relevant to your operating system for more information on creating a DSN. </br>
-   When prompted, select the TIBCO ComputeDB ODBC Driver from the list of drivers and enter a Data Source name. You can then enter either TIBCO ComputeDB Server Host, Port, User Name, and Password or TIBCO ComputeDB Locator Host, Port, User Name and Password.
+       Driver=TIBCO ComputeDB ODBC Driver;server=<ServerIP>;port=<ServerPort>;user=<userName>;password=<password>
+ * Create a TIBCO ComputeDB DSN (Data Source Name) using the installed TIBCO ComputeDB ODBC Driver. Refer to the Windows documentation relevant to your operating system for more information on creating a DSN. </br>
+   When prompted, select the TIBCO ComputeDB ODBC Driver from the list of drivers and enter a Data Source name, TIBCO ComputeDB Server Host, Port, User Name and Password.
Refer to the documentation for detailed information on [Setting Up TIBCO ComputeDB ODBC Driver](../setting_up_odbc_driver-tableau_desktop.md).
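
For illustration, a filled-in connection string could look like the following; the host, user, and password are placeholder values, and 1527 is assumed as the default client port:

    Driver=TIBCO ComputeDB ODBC Driver;server=192.168.1.10;port=1527;user=app;password=app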

## Connecting Spotfire® Desktop to TIBCO ComputeDB
14 changes: 8 additions & 6 deletions docs/howto/use_apache_zeppelin_with_snappydata.md
@@ -5,11 +5,13 @@ Do the following to use Apache Zeppelin with SnappyData:

1. [Download and Install SnappyData](/install/install_on_premise.md). The install zip for computeDB contains the Apache Zeppelin zip folder.
2. [Configure the SnappyData Cluster](/configuring_cluster/configuring_cluster.md).
- 3. Unzip the Apache Zeppelin artifact<name>.
- 4. Change to **Zeppelin** directory and start Zeppelin.
-        cd Zeppelin directory
-        ./bin/zeppelin-daemon.sh start
- 5. Go to localhost:8080 or (AWS-AMI_PublicIP):8080.
+ 3. Unzip the Apache Zeppelin artifact **zeppelin-0.8.2-snappydata-1.2.0.zip**. Change to the directory **zeppelin-0.8.2-snappydata-1.2.0** and start Apache Zeppelin server.
+
+        $ unzip zeppelin-0.8.2-snappydata-1.2.0.zip
+        $ cd zeppelin-0.8.2-snappydata-1.2.0/
+        $ ./bin/zeppelin-daemon.sh start
+
+ 5. Enter this URL in the browser: localhost:8080 or (AWS-AMI_PublicIP):8080.

![homepage](../Images/zeppelin.png)
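
If the server does not come up or needs a restart, the same daemon script can be queried; a sketch, assuming the stock Apache Zeppelin subcommands:

    $ ./bin/zeppelin-daemon.sh status
    $ ./bin/zeppelin-daemon.sh restart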

@@ -54,7 +56,7 @@ Refer [here](concurrent_apache_zeppelin_access_to_secure_snappydata.md) for inst
| SnappyData Zeppelin Interpreter | Apache Zeppelin Binary Package | SnappyData Release|
|--------|--------|--------|
- |[Version 0.7.3.6](https://github.com/SnappyDataInc/zeppelin-interpreter/releases/tag/v0.7.3.6) |[Version 0.7.3](http://archive.apache.org/dist/zeppelin/zeppelin-0.7.3/zeppelin-0.7.3-bin-netinst.tgz) |[Release 1.1.1](https://edelivery.tibco.com)|
+ |[Version 0.7.3.6](https://github.com/SnappyDataInc/zeppelin-interpreter/releases/tag/v0.7.3.6) |[Version 0.7.3](http://archive.apache.org/dist/zeppelin/zeppelin-0.7.3/zeppelin-0.7.3-bin-netinst.tgz) |[Release 1.2.0](https://edelivery.tibco.com)|
2. [Configure the SnappyData Cluster](../configuring_cluster/configuring_cluster.md).
20 changes: 1 addition & 19 deletions docs/install/building_from_source.md
@@ -8,7 +8,7 @@

**Latest release branch**
```pre
- > git clone https://github.com/SnappyDataInc/snappydata.git -b branch-<release-version> --recursive
+ > git clone https://github.com/SnappyDataInc/snappydata.git -b v<release-version> --recursive
> cd snappydata
> ./gradlew product
```
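
As a concrete example, assuming the 1.2.0 release tag is named v1.2.0 (tag name assumed, not confirmed by this page):

```pre
> git clone https://github.com/SnappyDataInc/snappydata.git -b v1.2.0 --recursive
> cd snappydata
> ./gradlew product
```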
@@ -22,24 +22,6 @@

The product is in **build-artifacts/scala-2.11/snappy**

- ## Build only the Top-level Components
-
- Use this option if you want to build only the top-level SnappyData project and pull in jars for other projects (spark, store, spark-jobserver):
-
- **Latest release branch**
- ```pre
- > git clone https://github.com/SnappyDataInc/snappydata.git -b branch-<release-version>
- > cd snappydata
- > ./gradlew product
- ```
-
- **Master**
- ```pre
- > git clone https://github.com/SnappyDataInc/snappydata.git
- > cd snappydata
- > ./gradlew product
- ```

## Repository Layout

- **core** - Extensions to Apache Spark that should not be dependent on SnappyData Spark additions, job server etc. It is also the bridge between _spark_ and _store_ (GemFireXD). For example, SnappyContext, row and column store, streaming additions etc.
2 changes: 1 addition & 1 deletion docs/install/install_on_premise.md
@@ -57,7 +57,7 @@ If all the machines in your cluster can share a path over an NFS or similar prot
2. Extract the downloaded archive file and go to SnappyData home directory.

$ tar -xzf snappydata-<version-number>-bin.tar.gz
-     $ cd snappydata-<version-number>.-bin/
+     $ cd snappydata-<version-number>-bin/
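      # For example, with the 1.2.0 release (file and directory names assumed):
      $ tar -xzf snappydata-1.2.0-bin.tar.gz
      $ cd snappydata-1.2.0-bin/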

3. Configure the cluster as described in [Configuring the Cluster](../configuring_cluster/configuring_cluster.md).

4 changes: 2 additions & 2 deletions docs/install/system_requirements.md
@@ -53,15 +53,15 @@ Requirements for each host:

* If you deploy SnappyData on a virtualized host, consult the documentation provided with the platform, for system requirements and recommended best practices, for running Java and latency-sensitive workloads.

- ## VSD Requirements
+ <!---## VSD Requirements
<ent>This feature is available only in the Enterprise version of SnappyData. </br></ent>
- Install 32-bit libraries on 64-bit Linux:</br>
`yum install glibc.i686 libX11.i686` on RHEL/CentOS</br>
`apt-get install libc6:i386 libx11-6:i386` on Ubuntu/Debian like systems</br>
- - Locally running X server. For example, an X server implementation like, XQuartz for Mac OS, Xming for Windows OS, and Xorg which is installed by default for Linux systems.
+ - Locally running X server. For example, an X server implementation like, XQuartz for Mac OS, Xming for Windows OS, and Xorg which is installed by default for Linux systems.--->

## Python Integration using pyspark
- The Python pyspark module has the same requirements as in Apache Spark. The numpy package is required by many modules of pyspark including the examples shipped with SnappyData. On recent Red Hat based systems, it can be installed using `sudo yum install numpy` or `sudo yum install python2-numpy` commands. Whereas, on Debian/Ubuntu based systems, you can install using the `sudo apt-get install python-numpy` command.
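
A quick way to verify the dependency before running the shipped examples (a sketch; the Python binary name varies by system and Python version):

    $ python -c "import numpy; print(numpy.__version__)"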
