Imperva DSFKit is the official Terraform toolkit designed to automate the deployment of Imperva's Data Security Fabric (DSF Hub & Agentless Gateway), formerly Sonar. The DSF Hub can be easily deployed by following the steps in this guide, which currently cover deployments on AWS only.
In the near future, DSFKit will enable you to deploy the full suite of DSF products, including DAM and DRA, and will support the other major public clouds.
This guide is intended for Imperva Sales Engineers (SEs) running Proof-of-Concept (POC) demonstrations and preparing for these demonstrations, also known as Lab.
It is also intended for Imperva Professional Services (PS) and customers for actual deployments of DSF.
This guide covers the following main topics. Additional guides are referenced throughout, as listed in the Quick Links section below.
- How to deploy Imperva’s Data Security Fabric (DSF) with step-by-step instructions.
- How to verify that the deployment was successful using the DSFKit output.
- How to undeploy DSF with step-by-step instructions.
This guide uses several text styles and call-out features for enhanced readability. Learn about their meaning from the table below.
Convention | Description |
--- | --- |
Code | Code, commands or user input |
>>>> | Instruction to change code, commands or user input |
${placeholder} | Used within commands to indicate that the user should replace the placeholder with a value, including the $, { and }. |
Hyperlinks | Clickable URLs embedded within the guide are blue and underlined. E.g., www.imperva.com |
This guide references the following information and links, some of which are available via the Documentation Portal on the Imperva website: https://docs.imperva.com. (Login required)
Link | Details |
--- | --- |
Data Security Fabric v1.0 | DSF Overview |
Sonar v4.10 | DSF Components Overview |
Imperva Terraform Modules Registry | |
DSFKit GitHub Repository | |
Download Git | |
Download Terraform | |
Open Terraform Cloud Account - Request Form | Grants access for a specific e-mail address to Imperva's Terraform Cloud account. Required for Terraform Cloud Deployment Mode |
Open TAR AWS S3 Bucket - Request Form | Grants access for a specific AWS account to Imperva's AWS S3 bucket where Sonar's installation tarball can be downloaded |
The following table lists the released DSFKit versions, their release date and a high-level summary of each version's content.
Date | Version | Details |
--- | --- | --- |
3 Nov 2022 | 1.0.0 | First release for SEs. Beta. |
20 Nov 2022 | 1.1.0 | Second release for SEs. Beta. |
3 Jan 2023 | 1.2.0 | 1. Added multi accounts example. 2. Changed modules interface. |
19 Jan 2023 | 1.3.4 | 1. Refactored directory structure. 2. Released to Terraform registry. 3. Supported DSF Hub / Agentless Gateway on RedHat 7 AMI. 4. Restricted permissions for Sonar installation. 5. Added the module's version to the examples. |
26 Jan 2023 | 1.3.5 | 1. Enabled creating RDS MsSQL with synthetic data for POC purposes. 2. Fixed manual and automatic installer machine deployments. |
5 Feb 2023 | 1.3.6 | Supported SSH proxy for DSF Hub / Agentless Gateway in modules: hub, agentless-gw, federation, poc-db-onboarder. |
28 Feb 2023 | 1.3.7 | 1. Added the option to provide a custom security group id for the DSF Hub and the Agentless Gateway via the 'security_group_id' variable. 2. Restricted network resources and general IAM permissions. 3. Added a new installation example - single_account_deployment. 4. Added the minimum required Terraform version to all modules. 5. Added the option to provide EC2 AMI filter details for the DSF Hub and the Agentless Gateway via the 'ami' variable. 6. For a user-provided AMI for the DSF node (DSF Hub and Agentless Gateway) that denies execute access in the '/tmp' folder, added the option to specify an alternative path via the 'terraform_script_path_folder' variable. 7. Passed the password of the DSF node via AWS Secrets Manager. 8. Added the option to provide a custom S3 bucket location for the Sonar binaries via the 'tarball_location' variable. 9. Bug fixes. |
Coming soon | | 1. Revised the installer machine deployment mode. |
DSFKit offers several deployment modes:
- CLI Deployment Mode: This mode offers a straightforward deployment option that relies on running a Terraform script on the deployment client's machine, which must be a Linux/Mac machine. For more details, refer to CLI Deployment Mode.
- Terraform Cloud Deployment Mode: This mode makes use of Terraform Cloud, a service that exposes a dedicated UI to create and destroy resources via Terraform. Use this mode if you don't want to install any software on the deployment client's machine. It can be used to demo DSF on an Imperva AWS account or on a customer's AWS account (if the customer supplies credentials). For more details, refer to Terraform Cloud Deployment Mode.
- Installer Machine Deployment Mode: This mode is similar to the CLI mode, except that Terraform is run on an EC2 machine which the user creates, instead of on the deployment client's machine. Use this mode if a Linux machine is not available, or if DSFKit cannot be run on the available Linux machine, e.g., because it does not have permissions to access the deployment environment. For more details, refer to Installer Machine Deployment Mode.
The first step in the deployment is to choose the deployment mode most appropriate to you. If you need more information to decide on your preferred mode, refer to the detailed instructions for each mode here.
Before using DSFKit to deploy DSF, it is necessary to complete the following steps:
- Create an AWS user, with access and secret keys, that complies with the required IAM permissions (see the IAM Role section).
- The deployment requires access to the tarball containing the Sonar binaries. The tarball is located in a dedicated AWS S3 bucket owned by Imperva. Click here to request access to download this file.
- Only if you chose the CLI Deployment Mode, download Git here.
- Only if you chose the CLI Deployment Mode, download Terraform here. On macOS systems, it is recommended to use the "Package Manager" option during installation.
NOTE: It may take several hours for access to the AWS S3 bucket and to Terraform Cloud to be granted after submitting the request forms.
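Once the prerequisites are in place, you can optionally verify the tooling and credentials from the deployment client's machine. This is a suggested sanity check rather than an official step, and the last command assumes the AWS CLI is installed, which is not itself a DSFKit prerequisite:

```bash
# Optional sanity check of the CLI prerequisites (not part of the official flow)
git --version                  # confirms Git is installed
terraform -version             # confirms Terraform is installed
aws sts get-caller-identity    # confirms your AWS keys resolve to the expected account (requires the AWS CLI)
```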
An important thing to understand about the DSF deployment is that there are many variations on what can be deployed, e.g., the number of Agentless Gateways, with or without HADR, the number of VPCs, etc.
We provide several out-of-the-box Terraform recipes, called "examples", which are already configured to deploy common DSF environments. You can use an example as is, or customize it to accommodate your deployment requirements.
These examples can be found in the DSFKit GitHub Repository under the examples directory. Some examples are intended for Lab or POC and others for actual DSF deployments by Professional Services and customers.
For more details about each example, click on the example name.
Example | Purpose | Description | Download |
--- | --- | --- | --- |
Basic Deployment | Lab/POC | A DSF deployment with a DSF Hub, an Agentless Gateway, federation, networking and onboarding of a MySQL DB. | basic_deployment.zip |
HADR Deployment | Lab/POC | A DSF deployment with a DSF Hub HADR, an Agentless Gateway, federation, networking and onboarding of a MySQL DB. | hadr_deployment.zip |
Single Account Deployment | PS/Customer | A DSF deployment with a DSF Hub HADR, an Agentless Gateway and federation. The DSF nodes (Hubs and Agentless Gateway) are in the same AWS account and the same region. It is mandatory to provide as input to this example the subnets to deploy the DSF nodes on. | single_account_deployment.zip |
Multi Account Deployment | PS/Customer | A DSF deployment with a DSF Hub, an Agentless Gateway and federation. The DSF nodes (Hub and Agentless Gateway) are in different AWS accounts. It is mandatory to provide as input to this example the subnets to deploy the DSF nodes on. | multi_account_deployment.zip |
If you are familiar with Terraform, you can go over the example code and see what it consists of. The examples make use of the building blocks of the DSFKit - the modules, which can be found in the Imperva Terraform Modules Registry. As a convention, the DSFKit modules' names have a 'dsf' prefix.
Feel free to fill out this form if you need help choosing or customizing an example to suit your needs.
When using DSFKit there is no need to manually download the DSF binaries; DSFKit does that automatically based on the Sonar version specified in the Terraform example recipe.
The latest version of Sonar, 4.10, is recommended, and Sonar 4.9 and higher are supported.
For example, in examples/poc/basic_deployment/variables.tf:

```hcl
variable "sonar_version" {
  type    = string
  default = "4.10"
}
```

>>>> Change the Sonar version to the one you want to install
Make sure that the Sonar version you are using is supported by all the modules which are part of your deployment. To see which Sonar versions are supported by each module, refer to the specific module's README. (For example, DSF Hub module's README)
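If you prefer not to edit variables.tf, the same value can be overridden on the command line when applying the example. This is standard Terraform behavior rather than anything specific to DSFKit:

```bash
# Override the example's default Sonar version at apply time using the standard -var flag
terraform apply -auto-approve -var="sonar_version=4.10"
```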
After you have chosen the deployment mode, follow the step-by-step instructions below to ensure a successful deployment. If you have any questions or issues during the deployment process, please contact Imperva Technical Support.
As mentioned in the Prerequisites, the DSF deployment requires access to the tarball containing the Sonar binaries. The tarball is located in a dedicated AWS S3 bucket owned by Imperva. Click here to request access to download this file.
This mode makes use of the Terraform Command Line Interface (CLI) to deploy and manage environments. The Terraform CLI flow uses a bash script and therefore requires a Linux/Mac machine.
The first thing to do in this deployment mode is to download Terraform.
NOTE: Update the values for the required parameters to complete the installation: example_name, aws_access_key_id, aws_secret_access_key and region
1. Download the zip file of the example you've chosen (see the Choosing the Example/Recipe that Fits Your Use Case section) from the DSFKit GitHub Repository, e.g., if you chose the "basic_deployment" example, download basic_deployment.zip.

2. Unzip the file in the CLI or using your operating system's UI. For example, in the CLI:

   ```bash
   unzip basic_deployment.zip
   ```

   >>>> Change this command depending on the example you chose

3. In the CLI, navigate to the directory which contains the Terraform files. For example:

   ```bash
   cd basic_deployment
   ```

   >>>> Change this command depending on the example you chose
4. Optionally make changes to the example's Terraform code to fit your use case. If you need help doing that, please contact Imperva Technical Support.

5. Terraform uses the AWS shell environment for AWS authentication. More details on how to authenticate with AWS are here. For simplicity, in this example we will use environment variables:

   ```bash
   export AWS_ACCESS_KEY_ID=${access_key}
   export AWS_SECRET_ACCESS_KEY=${secret_key}
   export AWS_REGION=${region}
   ```

   >>>> Fill in the values of the access_key, secret_key and region placeholders, e.g., export AWS_ACCESS_KEY_ID=5J5AVVNNHYY4DM6ZJ5N46.
6. Run:

   ```bash
   terraform init
   ```

7. Run:

   ```bash
   terraform apply -auto-approve
   ```

   This should take about 30 minutes.
8. Extract the web console admin password and DSF URL using:

   ```bash
   terraform output "dsf_hub_web_console"
   ```
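   The output contains, among other fields, the "public_url" and "admin_password" values used in the next step. For illustration only (the exact structure, port and values will differ in your deployment), it may look similar to:

   ```
   $ terraform output "dsf_hub_web_console"
   {
     "admin_password" = "<generated-password>"
     "public_url" = "https://<dsf-hub-address>:8443"
   }
   ```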
9. Access the DSF Hub by entering the DSF URL into a web browser. Enter "admin" as the username and the admin_password value output in the previous step as the password.
The CLI Deployment is now complete and a functioning version of DSF is now available.
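For reference, the flow above can be condensed into a single shell session. This is a non-authoritative recap, assuming the basic_deployment example and placeholder credentials that you must replace:

```bash
# Condensed recap of the CLI deployment flow (basic_deployment example assumed;
# replace the ${...} placeholders before running)
wget https://github.com/imperva/dsfkit/raw/1.3.7/examples/poc/basic_deployment/basic_deployment.zip
unzip basic_deployment.zip
cd basic_deployment

export AWS_ACCESS_KEY_ID=${access_key}
export AWS_SECRET_ACCESS_KEY=${secret_key}
export AWS_REGION=${region}

terraform init
terraform apply -auto-approve            # takes roughly 30 minutes
terraform output "dsf_hub_web_console"   # prints the DSF URL and admin password
```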
As mentioned in the Prerequisites, the DSF deployment requires access to the tarball containing the Sonar binaries. The tarball is located in a dedicated AWS S3 bucket owned by Imperva. Click here to request access to download this file.
This deployment mode uses the Terraform Cloud service, which allows deploying and managing deployments via a dedicated UI. Deploying the environment is easily triggered by clicking a button within the Terraform interface, which then pulls the required code from the Imperva GitHub repository and automatically runs the scripts remotely.
This deployment mode can be used to demonstrate DSF in a customer's Terraform Cloud account or the Imperva Terraform Cloud account, which is accessible for internal use (SEs, QA, Research, etc.), and can be used to deploy/undeploy POC environments on AWS accounts owned by Imperva.
It is required that you have access to a Terraform Cloud account. Any account may be used, whether the account is owned by Imperva or the customer. Click here to request access to Imperva's Terraform Cloud account.
If you want to use Imperva's Terraform Cloud account, the first thing to do is to request access here: Open Terraform Cloud Account - Request Form. Our internal Terraform Cloud account can only be used for demo purposes and not for customer deployments.
NOTE: Currently this deployment mode doesn't support customizing the chosen example's code.
1. Connect to Terraform Cloud: Connect to the desired Terraform Cloud account, either the internal Imperva account or a customer account if one is available.

2. Create a new workspace: Complete these steps to create a new workspace in Terraform Cloud that will be used for the DSF deployment.

   1. Click the + New workspace button in the top navigation bar to open the Create a new Workspace page.
   2. Choose Version Control Workflow from the workflow type options.
   3. Choose imperva/dsfkit as the repository. If this option is not displayed, type imperva/dsfkit in the "Filter" textbox.
   4. Name the workspace in the following format:

      dsfkit-${customer_name}-${environment_name}

      >>>> Fill in the values of the customer_name and environment_name placeholders, e.g., dsfkit-customer1-poc1

   5. Enter the path to the example you've chosen (see the Choosing the Example/Recipe that Fits Your Use Case section), e.g., "examples/poc/basic_deployment", into the Terraform working directory input field.

      >>>> Change the directory in the above screenshot depending on the example you chose

   6. To avoid automatic Terraform configuration changes when the GitHub repo updates, set the following values under "Run triggers". As displayed in the above screenshot, the Custom Regular Expression field value should be "23b82265".
   7. Click "Create workspace" to finish and save the new DSFKit workspace.
3. Add the AWS variables: The next few steps configure the required AWS variables.

   1. Once the DSFKit workspace is created, click the "Go to workspace overview" button.
   2. Add the following workspace variables by entering the name, value, category and sensitivity as listed below.

      Variable Name | Value | Category | Sensitive |
      --- | --- | --- | --- |
      AWS_ACCESS_KEY_ID | Your AWS credentials access key | Environment variable | True |
      AWS_SECRET_ACCESS_KEY | Your AWS credentials secret key | Environment variable | True |
      AWS_REGION | The AWS region you wish to deploy into | Environment variable | False |

      >>>> Change the AWS_REGION value in the above screenshot to the AWS region you want to deploy in
4. Run the Terraform: The following steps complete setting up the DSFKit workspace and running the example's Terraform code.

   1. Click on the Actions dropdown button in the top navigation bar, and select the "Start new run" option from the list.
   2. Enter a unique, alphanumeric name for the run, and click on the "Start run" button.

      >>>> Change the "Reason for starting run" value in the above screenshot to a run name of your choosing

   3. Wait for the run to complete; it should take about 30 minutes, and completion is indicated by "Apply finished".
5. Inspect the run result: These steps provide the necessary information to view the run output and access the deployed DSF.

   1. Scroll down the "Apply Finished" area to see which resources were created.
   2. Scroll to the bottom to find the "State versions created" link, which can be helpful when investigating issues.
   3. Scroll up to view the "Outputs" of the run, which should already be expanded, and locate the "dsf_hub_web_console" JSON object. Copy the "public_url" and "admin_password" fields' values for later use.
   4. Enter the "public_url" value you copied into a web browser to access the Imperva Data Security Fabric (DSF) login screen.
   5. Sonar is installed with a self-signed certificate; as a result, you may see a warning notification when opening the web page. For example, in Google Chrome, click "Proceed to domain.com (unsafe)".
   6. Enter "admin" into the Username field and the "admin_password" value you copied into the Password field. Click "Sign In".
The Terraform Cloud Deployment is now complete and a functioning version of DSF is now available.
As mentioned in the Prerequisites, the DSF deployment requires access to the tarball containing the Sonar binaries. The tarball is located in a dedicated AWS S3 bucket owned by Imperva. Click here to request access to download this file.
This mode is similar to the CLI mode, except that Terraform is run on an EC2 machine which the user creates, instead of on the deployment client's machine. Use this mode if a Linux machine is not available, or if DSFKit cannot be run on the available Linux machine, e.g., because it does not have permissions to access the deployment environment.
1. In AWS, choose a region for the installer machine, keeping in mind that the machine should have access to the DSF environment that you want to deploy, and preferably be in proximity to it.

2. Launch an Instance: Search for the RHEL-8.6.0_HVM-20220503-x86_64-2-Hourly2-GP2 image and press "enter":

3. Select the t2.medium 'Instance type', or t3.medium if T2 is not available in the region.

4. Create or select an existing 'Key pair' that you will later use to SSH to the installer machine.

5. In the Network settings panel, make your configurations while keeping in mind that the installer machine should have access to the DSF environment that you want to deploy, and that the deployment client's machine should have access to the installer machine.

6. Copy and paste the contents of this bash script into the User data textbox.

7. Click on Launch Instance. At this stage, the installer machine is initializing and downloading the necessary dependencies.
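   If you prefer the AWS CLI over the console, the same launch can be sketched roughly as follows. The AMI ID, key pair, security group, subnet and user-data file name are illustrative placeholders you must replace; the user-data file is the bash script referenced in step 6:

   ```bash
   # Illustrative sketch only - replace every ${...} placeholder with values from your own account.
   # The AMI should be the RHEL 8.6 image named in step 2, and the user-data file is the
   # bash script referenced in step 6.
   aws ec2 run-instances \
     --image-id ${rhel_8_6_ami_id} \
     --instance-type t2.medium \
     --key-name ${key_pair_name} \
     --security-group-ids ${security_group_id} \
     --subnet-id ${subnet_id} \
     --user-data file://installer_machine_user_data.sh
   ```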
8. When launching is complete, SSH to the installer machine from the deployment client's machine:

   ```bash
   ssh -i ${key_pair_file} ec2-user@${installer_machine_public_ip}
   ```

   >>>> Replace key_pair_file with the name of the file from step 4, and installer_machine_public_ip with the public IP of the installer machine, which should now be available in the AWS EC2 console. E.g., ssh -i a_key_pair.pem ec2-user@1.2.3.4

   NOTE: You may need to restrict the access permissions of the key_pair_file in order to be able to use it for SSH. For example:

   ```bash
   chmod 400 a_key_pair.pem
   ```
9. Download the zip file of the example you've chosen (see the Choosing the Example/Recipe that Fits Your Use Case section) from the DSFKit GitHub Repository, e.g., if you chose the "basic_deployment" example, download basic_deployment.zip. Run one of the following, depending on the example:

   ```bash
   wget https://github.com/imperva/dsfkit/raw/1.3.7/examples/poc/basic_deployment/basic_deployment.zip
   # or
   wget https://github.com/imperva/dsfkit/raw/1.3.7/examples/poc/hadr_deployment/hadr_deployment.zip
   # or
   wget https://github.com/imperva/dsfkit/raw/1.3.7/examples/installation/single_account_deployment/single_account_deployment.zip
   # or
   wget https://github.com/imperva/dsfkit/raw/1.3.7/examples/installation/multi_account_deployment/multi_account_deployment.zip
   ```

10. Unzip the file:

    ```bash
    unzip basic_deployment.zip
    ```

    >>>> Change this command depending on the example you chose

11. Continue by following the CLI Deployment Mode instructions, beginning at step 3.
IMPORTANT: Do not destroy the installer machine until you are done and have destroyed all other resources. Otherwise, there may be leftovers in your AWS account that will require manual deletion which is a tedious process. For more information see the Installer Machine Undeployment Mode section.
The Installer Machine Deployment is now complete and a functioning version of DSF is now available.
To be able to create AWS resources inside any AWS account, you need to provide an AWS user with the permissions required to run DSFKit Terraform. The permissions are separated into several policies. Use the relevant policies according to your needs:
1. For general required permissions, such as creating an EC2 instance, a security group, etc., use the permissions specified here - general required permissions.
2. In order to create network resources such as a VPC, NAT Gateway, Internet Gateway, etc., use the permissions specified here - create network resources permissions.
3. In order to onboard a MySQL RDS with CloudWatch configured, use the permissions specified here - onboard MySQL RDS permissions.
4. In order to onboard a MsSQL RDS with audit configured and with synthetic data, use the permissions specified here - onboard MsSQL RDS with synthetic data permissions.
NOTE: The permissions specified in option 2 are irrelevant for customers who prefer to use their own network objects, such as VPC, NAT Gateway, Internet Gateway, etc.
Depending on the deployment mode you chose, follow the undeployment instructions of the same mode to completely remove Imperva DSF from AWS.
The undeployment process should be followed whether the deployment was successful or not. In case of failure, Terraform may have deployed some resources before failing, and these should be removed.
1. Navigate to the directory which contains the Terraform files. For example:

   ```bash
   cd basic_deployment
   ```

   >>>> Change this command depending on the example you chose

2. Terraform uses the AWS shell environment for AWS authentication. More details on how to authenticate with AWS are here. For simplicity, in this example we will use environment variables:

   ```bash
   export AWS_ACCESS_KEY_ID=${access_key}
   export AWS_SECRET_ACCESS_KEY=${secret_key}
   export AWS_REGION=${region}
   ```

   >>>> Fill in the values of the access_key, secret_key and region placeholders, e.g., export AWS_ACCESS_KEY_ID=5J5AVVNNHYY4DM6ZJ5N46.

3. Run:

   ```bash
   terraform destroy -auto-approve
   ```
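Optionally, after the destroy completes, you can sanity-check that no resources remain in the Terraform state. This is an optional verification using standard Terraform commands, not part of the official flow:

```bash
# Optional sanity check: if the destroy succeeded, the state should contain no resources
terraform state list   # expected to print nothing
terraform show         # expected to report an empty state
```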
1. To undeploy the DSF deployment, click on Settings and find "Destruction and Deletion" in the navigation menu to open the "Destroy infrastructure" page. Ensure that the "Allow destroy plans" toggle is selected, and click on the Queue Destroy Plan button to begin.

2. The DSF deployment is now destroyed and the workspace may be re-used if needed. If this workspace is not being re-used, it may be removed with "Force delete from Terraform Cloud", which can be found under Settings.
NOTE: Do not remove the workspace before the deployment is completely destroyed. Doing so may lead to leftovers in your AWS account that will require manual deletion which is a tedious process.
1. SSH to the installer machine from the deployment client's machine:

   ```bash
   ssh -i ${key_pair_file} ec2-user@${installer_machine_public_ip}
   ```

   >>>> Fill in the values of the key_pair_file and installer_machine_public_ip placeholders (see the Installer Machine Deployment Mode section: https://github.com/imperva/dsfkit/tree/1.3.7#installer-machine-deployment-mode)

2. Continue by following the CLI Undeployment Mode steps.

3. Wait for the environment to be destroyed.

4. Terminate the EC2 installer machine via the AWS Console.
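   If you prefer the AWS CLI over the console, terminating the installer machine can be sketched as follows; the instance ID is a placeholder you must look up in your own account:

   ```bash
   # Illustrative only - replace the placeholder with the installer machine's instance ID
   aws ec2 terminate-instances --instance-ids ${installer_machine_instance_id}
   ```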
Review the following issues and troubleshooting remediations.
Title | Error message | Remediation |
--- | --- | --- |
VPC quota exceeded | error creating EC2 VPC: VpcLimitExceeded: The maximum number of VPCs has been reached | Remove unneeded VPCs via the VPC dashboard, or increase the VPC quota via this page, and run again |
EIP quota exceeded | Error creating EIP: AddressLimitExceeded: The maximum number of addresses has been reached | Remove unneeded Elastic IPs via this dashboard, or increase the Elastic IP quota via this page, and run again |
AWS internal glitches | Error: creating EC2 Instance: InvalidNetworkInterfaceID.NotFound: The networkInterface ID 'eni-xxx' does not exist | Rerun "terraform apply" to overcome AWS internal sync issues |
AWS Option Groups quota exceeded | Error: "Cannot create more than 20 option groups" | Similar to the other quota exceeded errors: remove unneeded Option Groups here, or increase the Option Groups quota via this page, and run again |
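As a quick way to see how close you are to the VPC and Elastic IP limits before re-running, the following AWS CLI sketch counts the resources currently in use in the selected region. It is an optional aid, not part of DSFKit:

```bash
# Optional aid: count VPCs and Elastic IPs currently in use in the selected region
aws ec2 describe-vpcs --query 'length(Vpcs)'
aws ec2 describe-addresses --query 'length(Addresses)'
```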