Elastic Enterprise Search | Elastic Workplace Search
For new users, we recommend using our Elasticsearch native tools rather than the standalone Workplace Search product. See this blog post for more information about upgrading your internal knowledge search to make it an amazing experience for your users.
We recommend using the new Elastic Network Drive connector to ingest content from your network drive into regular Elasticsearch indices.
Use this Elastic Enterprise Search network drives connector package to deploy and run a network drive connector on your own infrastructure. The connector package extracts and syncs data from network drives. The data is indexed into a Workplace Search content source within an Elastic deployment.
ℹ️ This connector package requires a compatible Elastic subscription level. Refer to the Elastic subscriptions pages for Elastic Cloud and self-managed deployments.
Table of contents:
- Setup and basic usage
- Gather network drives details
- Gather Elastic details
- Create a Workplace Search API key
- Create a Workplace Search content source
- Choose connector infrastructure and satisfy dependencies
- Install the connector
- Configure the connector
- Test the connection
- Sync data
- Log errors and exceptions
- Schedule recurring syncs
- Troubleshooting
- Advanced usage
- Connector reference
- Connector limitations
Complete the following steps to deploy and run the connector:
- Gather network drives details
- Gather Elastic details
- Create a Workplace Search API key
- Create a Workplace Search content source
- Choose connector infrastructure and satisfy dependencies
- Install the connector
- Configure the connector
- Test the connection
- Sync data
- Log errors and exceptions
- Schedule recurring syncs
The steps above are relevant to all users. Some users may require additional features. These are covered in the following sections:
Collect the information that is required to connect to your network drives:
- The domain name where the network drive is present.
- The network drives path the connector will crawl to fetch files. This is the name of the folder shared via SMB, followed by the path relative to that shared folder. For example: if the name of the SMB shared folder is `Users` and the folder to be fetched is `folder1`, then the path is `Users/folder1`.
- The username the connector will use to log in to network drives.
- The password the connector will use to log in to network drives.
- The server IP address where the network drive is hosted.
- The name of the server where the network drive is found. (Note that server name may not be the same as the hostname. On Windows, you can get the machine name from the System panel.)
- The client machine name, which can be any name of up to 15 characters.
ℹ️ The user credentials provided for the connector must have at least read permissions for the folder path provided.
Later, you will configure the connector with these values.
ℹ️ The connector uses the `pysmb` module to connect to your network drives. `pysmb` supports SMB1 and SMB2.
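Because the connector is built on `pysmb`, which addresses files by SMB share (service) name plus a path relative to that share, it helps to see how a configured path value decomposes. The helper below is illustrative only (it is not part of the connector package):

```python
# Split a configured network_drive.path value (e.g. "Users/folder1") into the
# SMB share (service) name and the path relative to that share, the two pieces
# pysmb-style APIs work with. Illustrative helper, not part of the connector.
def split_drive_path(path: str) -> tuple[str, str]:
    share, _, relative = path.strip("/").partition("/")
    return share, "/" + relative if relative else "/"

print(split_drive_path("Users/folder1"))  # ('Users', '/folder1')
```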
Some connector features require additional details. Review the following documentation if you plan to use these features:
First, ensure your Elastic deployment is compatible with the network drives connector package.
Next, determine the Enterprise Search base URL for your Elastic deployment.
Later, you will configure the connector with this value.
You also need a Workplace Search API key and a Workplace Search content source ID. You will create those in the following sections.
If you plan to use document-level permissions, you will also need user identity information. See Use document-level permissions (DLP) for details.
Each network drives connector authorizes its connection to Elastic using a Workplace Search API key.
Create an API key within Kibana. See Workplace Search API keys.
Each network drives connector syncs data from network drives into a Workplace Search content source.
Create a content source within Kibana:
- Navigate to Enterprise Search → Workplace Search → Sources → Add an organization content source → Custom API Source.
- Name your content source (e.g., Network Drives Connector).
For more details see Elastic Documentation for creating a custom content source.
Record the ID of the new content source. This value is labeled Source Identifier within Kibana. Later, you will configure the connector with this value.
Alternatively, you can use the connector’s `bootstrap` command to create the content source. See the `bootstrap` command.
After you’ve prepared the two services, you are ready to connect them.
Provision a Windows, macOS, or Linux server for your network drives connectors.
The infrastructure must provide the necessary runtime dependencies. See Runtime dependencies.
Clone or copy the contents of this repository to your infrastructure.
After you’ve provisioned infrastructure and copied the package, use the provided `make` target to install the connector:
make install_package
This command runs as the current user and installs the connector and its dependencies. Note: By default, the installed package supports Enterprise Search version 8.0 and above. To use the connector with older versions of Enterprise Search (below 8.0), pass the `ES_VERSION_V8` argument when running the `make install_package` or `make install_locally` command:
make install_package ES_VERSION_V8=no
ℹ️ Within a Windows environment, first install `make`:
winget install make
Next, ensure the `ees_network_drive` executable is on your `PATH`. For example, on macOS:
export PATH=/Users/shaybanon/Library/Python/3.8/bin:$PATH
The following table provides the installation location for each operating system (note Python version 3.8):
Operating system | Installation location |
---|---|
Linux | ./local/bin |
macOS | /Users/<user_name>/Library/Python/3.8/bin |
Windows | \Users\<user_name>\AppData\Roaming\Python\Python38\Scripts |
You must configure the connector to provide the information necessary to communicate with each service. You can provide additional configuration to customize the connector for your needs.
Create a YAML configuration file at any pathname. Later, you will include the `-c` option when running commands to specify the pathname to this configuration file.
Alternatively, in Linux environments only, locate the default configuration file created during installation. The file is named `network_drives_connector.yml` and is located within the `config` subdirectory where the package files were installed. See Install the connector for a listing of installation locations by operating system. When you use the default configuration file, you do not need to include the `-c` option when running commands.
After you’ve located or created the configuration file, populate each of the configuration settings. Refer to the settings reference. You must provide a value for all required settings.
Use the additional settings to customize the connection and manage features such as document-level permissions. See:
After you’ve configured the connector, you can test the connection between Elastic and network drives. Use the following `make` target to test the connection:
make test_connectivity
After you’ve confirmed the connection between the two services, you are ready to sync data from network drives to Elastic.
The following table lists the available sync operations, as well as the commands to perform the operations.
Operation | Command |
---|---|
Incremental sync | incremental-sync |
Full sync | full-sync |
Deletion sync | deletion-sync |
Permission sync | permission-sync |
Begin syncing with an incremental sync. This operation begins extracting and syncing content from network drives to Elastic. If desired, customize extraction and syncing for your use case.
Review the additional sync operations to learn about the different types of syncs. Additional configuration is required to use document-level permissions.
You can use the command line interface to run sync operations on demand, but you will likely want to schedule recurring syncs.
The various sync commands write logs to standard output and standard error.
To persist logs, redirect standard output and standard error to a file. For example:
ees_network_drive -c ~/config.yml incremental-sync >>~/incremental-sync.log 2>&1
You can use these log files to implement your own monitoring and alerting solution.
Configure the log level using the `log_level` setting.
Use a job scheduler, such as `cron`, to run the various sync commands as recurring syncs.
The following is an example crontab file:
0 */2 * * * ees_network_drive -c ~/config.yml incremental-sync >>~/incremental-sync.log 2>&1
0 0 */2 * * ees_network_drive -c ~/config.yml full-sync >>~/full-sync.log 2>&1
0 * * * * ees_network_drive -c ~/config.yml deletion-sync >>~/deletion-sync.log 2>&1
*/5 * * * * ees_network_drive -c ~/config.yml permission-sync >>~/permission-sync.log 2>&1
This example redirects standard output and standard error to files, as explained here: Log errors and exceptions.
Use this example to create your own crontab file. Manually add the file to your crontab using `crontab -e`. Or, if your system supports cron.d, copy or symlink the file into `/etc/cron.d/`.
The `flock` command is part of the `util-linux` package. You can install it with `yum install util-linux` or `sudo apt-get install -y util-linux`.
Using `flock` ensures the next scheduled cron job runs only after the current one has completed.
For example, to run `incremental-sync` as a cron job with `flock`:
0 */2 * * * /usr/bin/flock -w 0 /var/cron.lock ees_network_drive -c ~/config.yml incremental-sync >>~/incremental-sync.log 2>&1
To troubleshoot an issue, first view your logged errors and exceptions.
Use the following section to help troubleshoot further:
If you need assistance, use the Elastic community forums or Elastic support:
The following sections provide solutions for content extraction issues.
The connector uses the Tika module to parse file contents from attachments. Tika-python uses the Apache Tika REST server. To use this library, you need Java 7+ installed on your system, as tika-python starts the Tika REST server in the background.
At times, the Tika server fails to start, so content extraction from attachments may fail. To avoid this, make sure the Tika server is running in the background.
The Tika server also detects content in images by automatically calling Tesseract OCR. To allow Tika to extract content from images, you must manually install Tesseract OCR, ensure the `tesseract` command is on your `PATH`, and then restart the Tika server in the background (if it is already running). For example, on a Unix-like system:
ps aux | grep tika | grep server # find the Tika server PID
kill -9 <PID>
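Since image extraction silently degrades when Java or Tesseract is missing, a quick check before syncing can save a debugging round trip. This stdlib-only helper is illustrative and not part of the connector:

```python
import shutil

def ocr_prerequisites() -> dict:
    """Report whether the binaries Tika's OCR path needs are on PATH."""
    return {
        "java": shutil.which("java") is not None,            # needed by tika-python
        "tesseract": shutil.which("tesseract") is not None,  # needed for OCR
    }

print(ocr_prerequisites())
```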
The following sections cover additional features that are not covered by the basic usage described above.
After you’ve set up your first connection, you may want to further customize that connection or scale to multiple connections.
The connector supports the following rules for filtering files:
- Size: To filter all files of size > x bytes, add `>x` to the `include`/`exclude` field.
- Path Template: To index files in specific subdirectories, or files with specific names or types, use a glob pattern.

The Path Template can be used for the following use cases:
- Folder: For example, `/Engineering/**/*` will filter all the content inside the Engineering folder.
- File Name: To filter files matching a given name pattern, use a glob pattern such as `**/<filename>.*` (files named `<filename>`) or `**/main*.*` (filenames that start with the substring `main`).
- File Type: To filter `pdf` media type files, use the glob pattern `**/*.pdf`.

These rules are defined under the `include`/`exclude` parameters.
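The connector's exact glob implementation is internal, but Python's stdlib `fnmatch` gives a reasonable feel for how such patterns select paths (in `fnmatch`, `*` also crosses `/`, so `**/*.pdf` behaves like "any `.pdf` at any depth"). The paths below are made up:

```python
from fnmatch import fnmatchcase

# Illustrative only: approximate the path_template rules with stdlib fnmatch.
paths = [
    "/Engineering/specs/design.pdf",
    "/Engineering/readme.txt",
    "/HR/main_policy.docx",
]
print([p for p in paths if fnmatchcase(p, "/Engineering/**/*")])  # folder rule
print([p for p in paths if fnmatchcase(p, "**/*.pdf")])           # file-type rule
print([p for p in paths if fnmatchcase(p, "**/main*.*")])         # name-prefix rule
```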
Finally, you can set custom timestamps to control which objects are synced, based on their created or modified timestamps. Configure the following settings:
Complete the following steps to use document-level permissions:
- Enable document-level permissions
- Map user identities
- Sync document-level permissions data
Within your configuration, enable document-level permissions using the following setting: `enable_document_permission`.
Copy to your server a CSV file that provides the mapping of user identities. The file must follow this format:
- First column: network drive user's/group's SID
- Second column: Elastic username
Then, configure the location of the CSV file using the following setting: `network_drives_enterprise_search_user_mapping`.
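The mapping file is a plain two-column CSV. The sketch below writes and reads one with the stdlib `csv` module; the SIDs, usernames, and file location are placeholders:

```python
import csv, os, tempfile

# Illustrative identity-mapping file: column 1 is the network drive user/group
# SID, column 2 is the Elastic username. All values below are placeholders.
rows = [
    ["S-1-5-21-3623811015-3361044348-30300820-1013", "john.doe"],
    ["S-1-5-21-3623811015-3361044348-30300820-1014", "jane.roe"],
]

csv_path = os.path.join(tempfile.gettempdir(), "identity_mappings.csv")
with open(csv_path, "w", newline="") as f:
    csv.writer(f).writerows(rows)

with open(csv_path, newline="") as f:
    mapping = {sid: user for sid, user in csv.reader(f)}

print(mapping["S-1-5-21-3623811015-3361044348-30300820-1013"])  # john.doe
```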
Sync document-level permissions data from network drives to Elastic.
The following sync operations include permissions data:
Sync this information continually to ensure correct permissions. See Schedule recurring syncs.
The following reference sections provide technical details:
- Data extraction and syncing
- Sync operations
- Command line interface (CLI)
- Configuration settings
- Enterprise Search compatibility
- Runtime dependencies
Each network drives connector extracts and syncs files from network drives:
It extracts content from various document formats, and it performs optical character recognition (OCR) to extract content from images.
You can customize extraction and syncing per connector. See Customize extraction and syncing.
The following sections describe the various operations to sync data from network drives to Elastic.
Syncs to Enterprise Search all files created or modified since the previous incremental sync.
When using document-level permissions (DLP), each incremental sync will also perform a permission sync.
Perform this operation with the `incremental-sync` command.
Syncs to Enterprise Search all files created or modified since the configured `start_time`. Continues until the current time or the configured `end_time`.
Perform this operation with the `full-sync` command.
Deletes from Enterprise Search all files deleted since the previous deletion sync.
Perform this operation with the `deletion-sync` command.
Syncs to Enterprise Search all file document permissions since the previous permission sync.
When using document-level permissions (DLP), use this operation to sync all updates to users and groups within network drives.
Perform this operation with the `permission-sync` command.
Each network drives connector has the following command line interface (CLI):
ees_network_drive [-c <pathname>] <command>
The pathname of the configuration file to use for the given command.
ees_network_drive -c ~/config.yml full-sync
Creates a Workplace Search content source with the given name. Outputs its ID.
ees_network_drive bootstrap --name 'Accounting documents' --user 'shay.banon'
See also Create a Workplace Search content source.
To use this command, you must configure the following settings:
You must also provide the following arguments on the command line, as required:
- `--name` (required): The name of the Workplace Search content source to create.
- `--user` (optional): The username of the Elastic user who will own the content source. If provided, the connector will prompt for a password. If omitted, the connector will use the configured API key to create the content source.
Performs an incremental sync operation.
Performs a full sync operation.
Performs a deletion sync operation.
Performs a permission sync operation.
Configure any of the following settings for a connector:
The domain name where your network drive is present.
network_drive.domain: "Domain3"
The username of the account for network drives. See Gather network drives details.
network_drive.username: bill.gates
The password of the account to be used for crawling the network drives. See Gather network drives details.
network_drive.password: 'L,Ct%ddUvNTE5zk;GsDk^2w)(;,!aJ|Ip!?Oi'
The network drives path the connector will crawl to fetch files.
network_drive.path: 'path1/drives/org1/ws'
The name of the server where the network drive is found. (Note that server name may not be the same as the hostname. On Windows, you can get the machine name from the System panel.)
network_drive.server_name: 'elastic'
The IP address of the server where the network drive is hosted.
network_drive.server_ip: '1.2.3.4'
The client machine name, which can be any name of up to 15 characters.
client_machine.name: 'Workplace'
The Workplace Search API key. See Create a Workplace Search API key.
enterprise_search.api_key: 'zvksftxrudcitxa7ris4328b'
The ID of the Workplace Search content source. See Create a Workplace Search content source.
enterprise_search.source_id: '12345678909876543ab21012a'
The Enterprise Search base URL for your Elastic deployment. Note: For Enterprise Search version 8 and above, the port is required.
enterprise_search.host_url: https://my-deployment.ent.europe-west1.gcp.cloud.es.io:9243
Whether the connector should sync document-level permissions (DLP) from network drives.
By default, it is set to `Yes`, i.e. the connector will try to sync document-level permissions.
enable_document_permission: Yes
Specifies which files should be indexed based on their size or path template in network drives.
Note: By default, the connector includes files listed in the include field and excludes files listed in the exclude field.
include:
size:
path_template: ["**/*.txt", "**/*.contact", "**/*.docx", "**/*.json", "**/*.png", "**/*.jpg", "**/*.jpeg", "**/*.py", "**/*.yml", "**/*.md", "**/*.ini", "**/*.sh", "**/*.rst", "**/*.pdf", "**/*.rtf", "**/*.ppt", "**/*.file"]
exclude:
size: [">10000000"]
path_template:
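A size rule such as `">10000000"` is just a comparison against a byte count. The hypothetical parser below (not the connector's actual implementation) shows how such an expression can become a predicate:

```python
import operator, re

# Illustrative: turn a size rule like ">10000000" into a predicate on a file's
# size in bytes. Not the connector's actual implementation.
OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def size_rule(expr: str):
    m = re.fullmatch(r"(>=|<=|>|<)\s*(\d+)", expr.strip())
    if not m:
        raise ValueError(f"bad size rule: {expr!r}")
    op, limit = OPS[m.group(1)], int(m.group(2))
    return lambda size: op(size, limit)

exclude_large = size_rule(">10000000")
print(exclude_large(20_000_000))  # True: matched by the exclude rule
print(exclude_large(1_024))       # False: kept
```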
A UTC timestamp the connector uses to determine which objects to extract and sync from network drives. Determines the starting point for a full sync.
Supports the following time format: `YYYY-MM-DDTHH:MM:SSZ`.
start_time: 2022-04-01T04:44:16Z
By default, it is set to the Unix epoch, `1970-01-01T00:00:00Z`.
A UTC timestamp the connector uses to determine which objects to extract and sync from network drives. Determines the stopping point for a full sync. Supports the following time format: `YYYY-MM-DDTHH:MM:SSZ`.
end_time: 2022-04-01T04:44:16Z
By default, it is set to current execution time.
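Both timestamps use the same UTC format, so it can be worth validating a value before placing it in the config. A quick stdlib check (illustrative, not part of the connector):

```python
from datetime import datetime, timezone

TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ"  # matches YYYY-MM-DDTHH:MM:SSZ

def parse_sync_time(value: str) -> datetime:
    """Parse a start_time/end_time value, treating the trailing Z as UTC."""
    return datetime.strptime(value, TIME_FORMAT).replace(tzinfo=timezone.utc)

print(parse_sync_time("2022-04-01T04:44:16Z").isoformat())
# 2022-04-01T04:44:16+00:00
```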
The level or severity that determines the threshold for logging a message. One of the following values:
- `DEBUG`
- `INFO` (default)
- `WARN`
- `ERROR`
log_level: INFO
By default, it is set to `INFO`.
The number of retries to perform when there is a server error. The connector applies an exponential backoff algorithm to retries.
retry_count: 3
By default, it is set to `3`.
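The connector's exact retry logic is internal, but the general shape of a `retry_count` setting combined with exponential backoff looks like the sketch below (the operation, delays, and exception type are placeholders):

```python
import time

# Illustrative retry loop with exponential backoff; `retry_count` mirrors the
# config setting. The flaky operation below is a stand-in for a server call.
def with_retries(operation, retry_count=3, base_delay=1.0):
    for attempt in range(retry_count + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == retry_count:
                raise  # retries exhausted: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("server error")
    return "ok"

print(with_retries(flaky, retry_count=3, base_delay=0))  # ok
```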
The number of threads the connector will run in parallel when fetching documents from the network drive. By default, the connector uses 5 threads.
network_drives_sync_thread_count: 5
The number of threads the connector will run in parallel when indexing documents into the Enterprise Search instance. By default, the connector uses 5 threads.
enterprise_search_sync_thread_count: 5
On a Linux server with at least 2 GB RAM and 4 vCPUs, you can increase the thread counts if the overall CPU and RAM are underutilized, i.e. below 60-70%.
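The thread-count settings map to pools of parallel workers. A minimal fan-out sketch with a pool of 5 workers (the fetch function and paths are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative: fetch documents in parallel with a pool sized like
# network_drives_sync_thread_count. fetch_document is a stand-in.
def fetch_document(path: str) -> dict:
    return {"path": path, "status": "fetched"}

paths = [f"Users/folder1/file_{i}.txt" for i in range(10)]
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fetch_document, paths))

print(len(results))  # 10
```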
The pathname of the CSV file containing the user identity mappings for document-level permissions (DLP).
network_drives_enterprise_search_user_mapping: 'C:/Users/banon/connector/identity_mappings.csv'
The network drives connector package is compatible with Elastic deployments that meet the following criteria:
- Elastic Enterprise Search version greater than or equal to 7.13.0.
- An Elastic subscription that supports this feature. Refer to the Elastic subscriptions pages for Elastic Cloud and self-managed deployments.
Each network drives connector requires a runtime environment that satisfies the following dependencies:
- Windows, macOS, or Linux server. The connector has been tested with CentOS 7, macOS Monterey v12.0.1, and Windows 10.
- Python version 3.6 or later.
- To extract content from images: Java version 7 or later, and the `tesseract` command installed and added to `PATH`.
- To schedule recurring syncs: a job scheduler, such as `cron`.
ℹ️ Currently, the connector will only work for Windows and Linux shared SMB folders.