|
---
title: Deploy Elasticsearch on a development virtual machine in Azure
description: Install the Elastic Stack (ELK) onto a development Linux VM in Azure
services: virtual-machines
author: rloutlaw
manager: justhe
ms.service: azure-virtual-machines
ms.collection: linux
ms.devlang: azurecli
ms.custom: devx-track-azurecli, linux-related-content, innovation-engine
ms.topic: how-to
ms.date: 10/11/2017
ms.author: routlaw
---

# Install the Elastic Stack (ELK) on an Azure VM

**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets

This article walks you through how to deploy [Elasticsearch](https://www.elastic.co/products/elasticsearch), [Logstash](https://www.elastic.co/products/logstash), and [Kibana](https://www.elastic.co/products/kibana) on an Ubuntu VM in Azure. To see the Elastic Stack in action, you can optionally connect to Kibana and work with some sample logging data.

Additionally, you can follow the [Deploy Elastic on Azure Virtual Machines](/training/modules/deploy-elastic-azure-virtual-machines/) module for a more guided tutorial on deploying Elastic on Azure Virtual Machines.

In this tutorial, you learn how to:

> [!div class="checklist"]
> * Create an Ubuntu VM in an Azure resource group
> * Install Elasticsearch, Logstash, and Kibana on the VM
> * Send sample data to Elasticsearch with Logstash
> * Open ports and work with data in the Kibana console

This deployment is suitable for basic development with the Elastic Stack. For more on the Elastic Stack, including recommendations for a production environment, see the [Elastic documentation](https://www.elastic.co/guide/index.html) and the [Azure Architecture Center](/azure/architecture/elasticsearch/).

[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]

- This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.

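If you're running the Azure CLI locally, you can confirm the installed version before you begin. This optional check isn't part of the original walkthrough:

```azurecli-interactive
az version
```
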
## Create a resource group

In this section, you declare environment variables for use in subsequent commands. A random suffix is appended to resource names to ensure uniqueness.

```bash
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export RESOURCE_GROUP="myResourceGroup$RANDOM_SUFFIX"
export REGION="eastus2"
az group create --name $RESOURCE_GROUP --location $REGION
```

Results:

<!-- expected_similarity=0.3 -->
```JSON
{
  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxxxxx",
  "location": "eastus2",
  "managedBy": null,
  "name": "myResourceGroupxxxxxx",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
```

## Create a virtual machine

This section creates a VM with a unique name and generates SSH keys if they don't already exist. The same random suffix is appended to the VM name to ensure uniqueness.

```bash
export VM_NAME="myVM$RANDOM_SUFFIX"
az vm create \
  --resource-group $RESOURCE_GROUP \
  --name $VM_NAME \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```

When the VM has been created, the Azure CLI shows information similar to the following example. Take note of the `publicIpAddress` value; this address is used to access the VM.

Results:

<!-- expected_similarity=0.3 -->
```JSON
{
  "fqdns": "",
  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxxxxx/providers/Microsoft.Compute/virtualMachines/myVMxxxxxx",
  "location": "eastus2",
  "macAddress": "xx:xx:xx:xx:xx:xx",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4",
  "publicIpAddress": "x.x.x.x",
  "resourceGroup": "myResourceGroupxxxxxx"
}
```

## SSH into your VM

If you don't already know the public IP address of your VM, run the following command to list it:

```azurecli-interactive
az network public-ip list --resource-group $RESOURCE_GROUP --query [].ipAddress
```

Rather than typing the address into each command, store it in an environment variable. The remaining commands in this article use this variable to run commands on the VM over SSH:

```bash
export PUBLIC_IP_ADDRESS=$(az network public-ip list --resource-group $RESOURCE_GROUP --query [].ipAddress -o tsv)
```

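To confirm that the variable is set and that the VM accepts SSH connections, you can run a quick optional check:

```bash
echo "VM public IP: $PUBLIC_IP_ADDRESS"
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "echo Connected to \$(hostname)"
```
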
## Install the Elastic Stack

In this section, you import the Elasticsearch signing key and update your APT sources list to include the Elastic 7.x package repository, then install the Java runtime used by the Elastic Stack components.

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
# Import the Elasticsearch signing key and add the Elastic 7.x APT repository
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/elasticsearch.gpg
echo \"deb https://artifacts.elastic.co/packages/7.x/apt stable main\" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
"
```

Install the Java Virtual Machine on the VM and configure the JAVA_HOME variable so that it persists for future shell sessions:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo apt install -y openjdk-8-jre-headless
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> ~/.bashrc
"
```

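To confirm that the Java runtime installed correctly, you can check its version on the VM (an optional verification step):

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "java -version"
```
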
Run the following command to update the Ubuntu package sources and install Elasticsearch, Kibana, and Logstash from the repository you added earlier.

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo apt update

# Install the Elastic Stack (ELK) components
sudo apt install -y elasticsearch kibana logstash
"
```

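To confirm which versions were installed, you can optionally query the package database on the VM:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
dpkg -s elasticsearch kibana logstash | grep -E '^(Package|Version):'
"
```
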
> [!NOTE]
> Detailed installation instructions, including directory layouts and initial configuration, are maintained in [Elastic's documentation](https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html).

## Start Elasticsearch

Start Elasticsearch on your VM with the following command:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo systemctl start elasticsearch.service
"
```

This command produces no output, so verify that Elasticsearch is running on the VM with this `curl` command:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sleep 11
curl -XGET 'localhost:9200/'
"
```

If Elasticsearch is running, you see output like the following:

Results:

<!-- expected_similarity=0.3 -->
```json
{
  "name" : "myVMxxxxxx",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "xxxxxxxxxxxxxxxxxxxxxx",
  "version" : {
    "number" : "7.17.x",
    "build_hash" : "xxxxxxx",
    "build_date" : "xxxx-xx-xxTxx:xx:xx.xxxZ",
    "build_snapshot" : false,
    "lucene_version" : "8.11.x"
  },
  "tagline" : "You Know, for Search"
}
```

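As an additional optional check, you can ask Elasticsearch for its cluster health. A single-node development cluster typically reports a `green` or `yellow` status:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
curl -XGET 'localhost:9200/_cluster/health?pretty'
"
```
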
## Start Logstash and add data to Elasticsearch

Start Logstash with the following command:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo systemctl start logstash.service
"
```

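Starting the service produces no output either. If you want to confirm that Logstash is running before you continue, you can query its service state (optional check):

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo systemctl is-active logstash.service
"
```
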
Test Logstash to make sure it's working correctly:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
# Time-limited test that reads new entries from the system log instead of stdin
sudo timeout 11s /usr/share/logstash/bin/logstash -e 'input { file { path => \"/var/log/syslog\" start_position => \"end\" sincedb_path => \"/dev/null\" stat_interval => \"1 second\" } } output { stdout { codec => json } }' || echo \"Logstash test completed\"
"
```

This is a basic Logstash [pipeline](https://www.elastic.co/guide/en/logstash/7.17/pipeline.html) that reads new entries from the system log and echoes them to standard output.

Set up Logstash to forward the system logs from this VM to Elasticsearch. To create the Logstash configuration file, run the following command, which writes the configuration to a new file called vm-syslog-logstash.conf:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
cat << 'EOF' > vm-syslog-logstash.conf
input {
  stdin {
    type => \"stdin-type\"
  }

  file {
    type => \"syslog\"
    path => [ \"/var/log/*.log\", \"/var/log/*/*.log\", \"/var/log/messages\", \"/var/log/syslog\" ]
    start_position => \"beginning\"
  }
}

output {

  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => \"localhost:9200\"
  }
}
EOF
"
```

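Before sending data, you can optionally ask Logstash to validate the configuration file without running the pipeline by using its `--config.test_and_exit` flag:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo /usr/share/logstash/bin/logstash -f vm-syslog-logstash.conf --config.test_and_exit
"
```
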
Test this configuration and send the syslog data to Elasticsearch. The following commands run on the VM over SSH, start Logstash with the configuration for 60 seconds, and then check that documents arrived in Elasticsearch:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no 'bash -s' << 'EOF'
# Run Logstash with the configuration for 60 seconds, then let timeout stop it.
# stdin is redirected from /dev/null so the stdin input exits immediately.
echo "Processing logs for 60 seconds..."
sudo timeout 60s /usr/share/logstash/bin/logstash -f vm-syslog-logstash.conf < /dev/null || echo "Logstash run completed"

# Give Elasticsearch a few seconds to finish indexing
sleep 5

# Verify data was sent to Elasticsearch with basic error handling
echo "Verifying data in Elasticsearch..."
ES_COUNT=$(curl -s -XGET 'localhost:9200/_cat/count?v' | tail -n 1 | awk '{print $3}' 2>/dev/null || echo "0")

# Make sure ES_COUNT is a number, or default to 0
if ! [[ "$ES_COUNT" =~ ^[0-9]+$ ]]; then
  ES_COUNT=0
  echo "Warning: Could not get a valid document count from Elasticsearch"
fi

echo "Found $ES_COUNT documents in Elasticsearch"

if [ "$ES_COUNT" -gt 0 ]; then
  echo "✅ Logstash successfully sent data to Elasticsearch"
else
  echo "❌ No data found in Elasticsearch; there might be an issue with the Logstash configuration"
fi
EOF
```

You see the syslog entries echoed in your terminal as they're sent to Elasticsearch. Logstash stops automatically when the 60-second timeout expires, so you don't need to exit it manually.

## Start Kibana and visualize the data in Elasticsearch

Edit the Kibana configuration file (/etc/kibana/kibana.yml) and change the IP address Kibana listens on so you can access it from your web browser:

```text
server.host: "0.0.0.0"
```

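If you'd rather make this change without opening an editor on the VM, one possible approach (a sketch that appends the setting, assuming `server.host` isn't already set elsewhere in the file) is:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
echo 'server.host: \"0.0.0.0\"' | sudo tee -a /etc/kibana/kibana.yml
"
```
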
Start Kibana with the following command:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo systemctl start kibana.service
"
```

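Kibana can take a minute or two to become available after the service starts. The following optional check polls port 5601 on the VM until Kibana responds (a simple sketch; adjust the retry count as needed):

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
for i in \$(seq 1 12); do
  if curl -s -o /dev/null localhost:5601; then echo 'Kibana is responding'; break; fi
  echo 'Waiting for Kibana...'
  sleep 10
done
"
```
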
Open port 5601 from the Azure CLI to allow remote access to the Kibana console:

```azurecli-interactive
az vm open-port --port 5601 --resource-group $RESOURCE_GROUP --name $VM_NAME
```

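With the port open, the Kibana console should be reachable from your web browser on port 5601 of the VM's public IP address. You can print the URL for convenience:

```bash
echo "Browse to: http://$PUBLIC_IP_ADDRESS:5601"
```
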
## Next steps

In this tutorial, you deployed the Elastic Stack into a development VM in Azure. You learned how to:

> [!div class="checklist"]
> * Create an Ubuntu VM in an Azure resource group
> * Install Elasticsearch, Logstash, and Kibana on the VM
> * Send sample data to Elasticsearch from Logstash
> * Open ports and work with data in the Kibana console