Commit 6613502

Author: naman-msft
Commit message: updated docs
Parent: 30d7205

12 files changed: +1510 additions, -144 deletions

scenarios/azure-aks-docs/articles/aks/node-image-upgrade.md

Lines changed: 0 additions & 10 deletions
@@ -10,16 +10,6 @@ author: schaffererin
 ms.author: schaffererin
 ---
 
-## Environment Variables
-
-The following environment variables are declared and will be used in subsequent code blocks. They replace the placeholder parameters in the original document with standardized variable names.
-
-```bash
-export AKS_NODEPOOL="nodepool1"
-export AKS_CLUSTER="apache-airflow-aks"
-export AKS_RESOURCE_GROUP="apache-airflow-rg"
-```
-
 # Upgrade Azure Kubernetes Service (AKS) node images
 
 Azure Kubernetes Service (AKS) regularly provides new node images, so it's beneficial to upgrade your node images frequently to use the latest AKS features. Linux node images are updated weekly, and Windows node images are updated monthly. Image upgrade announcements are included in the [AKS release notes](https://github.com/Azure/AKS/releases), and it can take up to a week for these updates to be rolled out across all regions. You can also perform node image upgrades automatically and schedule them using planned maintenance. For more information, see [Automatically upgrade node images][auto-upgrade-node-image].

scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-elasticsearch.md

Lines changed: 304 additions & 0 deletions

@@ -0,0 +1,304 @@

---
title: Deploy ElasticSearch on a development virtual machine in Azure
description: Install the Elastic Stack (ELK) onto a development Linux VM in Azure
services: virtual-machines
author: rloutlaw
manager: justhe
ms.service: azure-virtual-machines
ms.collection: linux
ms.devlang: azurecli
ms.custom: devx-track-azurecli, linux-related-content, innovation-engine
ms.topic: how-to
ms.date: 10/11/2017
ms.author: routlaw
---

# Install the Elastic Stack (ELK) on an Azure VM

**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets

This article walks you through how to deploy [Elasticsearch](https://www.elastic.co/products/elasticsearch), [Logstash](https://www.elastic.co/products/logstash), and [Kibana](https://www.elastic.co/products/kibana) on an Ubuntu VM in Azure. To see the Elastic Stack in action, you can optionally connect to Kibana and work with some sample logging data.

Additionally, you can follow the [Deploy Elastic on Azure Virtual Machines](/training/modules/deploy-elastic-azure-virtual-machines/) module for a more guided tutorial on deploying Elastic on Azure Virtual Machines.

In this tutorial, you learn how to:

> [!div class="checklist"]
> * Create an Ubuntu VM in an Azure resource group
> * Install Elasticsearch, Logstash, and Kibana on the VM
> * Send sample data to Elasticsearch with Logstash
> * Open ports and work with data in the Kibana console

This deployment is suitable for basic development with the Elastic Stack. For more on the Elastic Stack, including recommendations for a production environment, see the [Elastic documentation](https://www.elastic.co/guide/index.html) and the [Azure Architecture Center](/azure/architecture/elasticsearch/).

[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]

- This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.

## Create a resource group

In this section, environment variables are declared for use in subsequent commands. A random suffix is appended to resource names for uniqueness.

```bash
export RANDOM_SUFFIX=$(openssl rand -hex 3)
export RESOURCE_GROUP="myResourceGroup$RANDOM_SUFFIX"
export REGION="eastus2"
az group create --name $RESOURCE_GROUP --location $REGION
```

Results:

<!-- expected_similarity=0.3 -->
```JSON
{
  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxxxxx",
  "location": "eastus2",
  "managedBy": null,
  "name": "myResourceGroupxxxxxx",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
```

## Create a virtual machine

This section creates a VM with a unique name and generates SSH keys if they do not already exist. A random suffix is appended to the VM name to ensure uniqueness.

```bash
export VM_NAME="myVM$RANDOM_SUFFIX"
az vm create \
  --resource-group $RESOURCE_GROUP \
  --name $VM_NAME \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```

When the VM has been created, the Azure CLI shows information similar to the following example. Take note of the publicIpAddress value; this address is used to access the VM.

Results:

<!-- expected_similarity=0.3 -->
```JSON
{
  "fqdns": "",
  "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroupxxxxxx/providers/Microsoft.Compute/virtualMachines/myVMxxxxxx",
  "location": "eastus2",
  "macAddress": "xx:xx:xx:xx:xx:xx",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4",
  "publicIpAddress": "x.x.x.x",
  "resourceGroup": "myResourceGroupxxxxxx"
}
```

## SSH into your VM

If you don't already know the public IP address of your VM, run the following command to list it:

```azurecli-interactive
az network public-ip list --resource-group $RESOURCE_GROUP --query [].ipAddress
```

The following command captures the public IP address of the virtual machine in an environment variable. The remaining commands in this article use that variable to run each step on the VM over SSH.

```bash
export PUBLIC_IP_ADDRESS=$(az network public-ip list --resource-group $RESOURCE_GROUP --query [].ipAddress -o tsv)
```
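
As a quick optional check that the variable is populated and that key-based SSH login works, a one-off remote command like the following prints the VM's hostname:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "echo Connected to \$(hostname)"
```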

## Install the Elastic Stack

In this section, you import the Elasticsearch signing key and update your APT sources list to include the Elastic package repository. You then install the Java runtime environment, which the Elastic Stack components require.

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo 'deb https://artifacts.elastic.co/packages/5.x/apt stable main' | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
"
```

Install the Java Virtual Machine on the VM and configure the JAVA_HOME variable:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo apt install -y openjdk-8-jre-headless
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
"
```
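
The `export JAVA_HOME=...` line above applies only to that single SSH session. If you also want the variable set persistently for interactive sessions on the VM (not required for the rest of this article), one option is to append it to the azureuser profile:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> ~/.bashrc
"
```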

Run the following command to update the Ubuntu package sources and install Elasticsearch, Kibana, and Logstash:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
# Download the Elastic signing key and store it in the binary format apt expects
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/elasticsearch.gpg

echo 'deb https://artifacts.elastic.co/packages/7.x/apt stable main' | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

sudo apt update

# Now install the ELK stack
sudo apt install -y elasticsearch kibana logstash
"
```

> [!NOTE]
> Detailed installation instructions, including directory layouts and initial configuration, are maintained in [Elastic's documentation](https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html).
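
The `systemctl start` commands in the following sections start each service for the current boot only. If you also want Elasticsearch, Kibana, and Logstash to start automatically after a VM reboot (not required for this walkthrough), you can enable the services once they're installed:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service kibana.service logstash.service
"
```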

## Start Elasticsearch

Start Elasticsearch on your VM with the following command:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo systemctl start elasticsearch.service
"
```

This command produces no output, so verify that Elasticsearch is running on the VM with this curl command:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sleep 11
sudo curl -XGET 'localhost:9200/'
"
```

If Elasticsearch is running, you see output like the following:

Results:

<!-- expected_similarity=0.3 -->
```json
{
  "name" : "w6Z4NwR",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "SDzCajBoSK2EkXmHvJVaDQ",
  "version" : {
    "number" : "5.6.3",
    "build_hash" : "1a2f265",
    "build_date" : "2017-10-06T20:33:39.012Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
```

## Start Logstash and add data to Elasticsearch

Start Logstash with the following command:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo systemctl start logstash.service
"
```

Test Logstash to make sure it's working correctly:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
# Time-limited test with a file input instead of stdin
sudo timeout 11s /usr/share/logstash/bin/logstash -e 'input { file { path => \"/var/log/syslog\" start_position => \"end\" sincedb_path => \"/dev/null\" stat_interval => \"1 second\" } } output { stdout { codec => json } }' || echo 'Logstash test completed'
"
```

This is a basic Logstash [pipeline](https://www.elastic.co/guide/en/logstash/5.6/pipeline.html) that tails /var/log/syslog on the VM and echoes each event to standard output as JSON; the timeout wrapper ends the test after a few seconds.
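
If you prefer the classic interactive smoke test, where you type lines on standard input and Logstash echoes them back as events, you can run it over an interactive SSH session instead (press CTRL+C to exit):

```bash
ssh -t azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "sudo /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout { } }'"
```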

Set up Logstash to forward the syslog messages from this VM to Elasticsearch. To create the Logstash configuration file, run the following command, which writes the configuration to a new file called vm-syslog-logstash.conf:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
cat << 'EOF' > vm-syslog-logstash.conf
input {
  stdin {
    type => \"stdin-type\"
  }

  file {
    type => \"syslog\"
    path => [ \"/var/log/*.log\", \"/var/log/*/*.log\", \"/var/log/messages\", \"/var/log/syslog\" ]
    start_position => \"beginning\"
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => \"localhost:9200\"
  }
}
EOF
"
```

Test this configuration and send the syslog data to Elasticsearch. Like the previous steps, the script runs on the VM over SSH:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no 'bash -s' << 'ENDSSH'
# Run Logstash with the configuration for 60 seconds
sudo timeout 60s /usr/share/logstash/bin/logstash -f vm-syslog-logstash.conf &
LOGSTASH_PID=$!

# Wait for data to be processed
echo "Processing logs for 60 seconds..."
sleep 65

# Verify data was sent to Elasticsearch with proper error handling
echo "Verifying data in Elasticsearch..."
ES_COUNT=$(sudo curl -s -XGET 'localhost:9200/_cat/count?v' | tail -n 1 | awk '{print $3}' 2>/dev/null || echo "0")

# Make sure ES_COUNT is a number or default to 0
if ! [[ "$ES_COUNT" =~ ^[0-9]+$ ]]; then
  ES_COUNT=0
  echo "Warning: Could not get valid document count from Elasticsearch"
fi

echo "Found $ES_COUNT documents in Elasticsearch"

if [ "$ES_COUNT" -gt 0 ]; then
  echo "✅ Logstash successfully sent data to Elasticsearch"
else
  echo "❌ No data found in Elasticsearch, there might be an issue with the Logstash configuration"
fi
ENDSSH
```

You see the syslog entries echoed in your terminal as they are sent to Elasticsearch. Logstash exits on its own when the 60-second timeout expires, so there's no need to stop it manually.

## Start Kibana and visualize the data in Elasticsearch

Edit the Kibana configuration file (/etc/kibana/kibana.yml) and change the IP address Kibana listens on so you can access it from your web browser:

```text
server.host: "0.0.0.0"
```
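
If you'd rather apply that change without opening an editor on the VM, one option is sed over SSH; this sketch assumes the stock kibana.yml, where server.host ships as a commented-out default:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo sed -i 's/^#\?server\.host:.*/server.host: \"0.0.0.0\"/' /etc/kibana/kibana.yml
"
```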

Start Kibana with the following command:

```bash
ssh azureuser@$PUBLIC_IP_ADDRESS -o StrictHostKeyChecking=no "
sudo systemctl start kibana.service
"
```

Open port 5601 from the Azure CLI to allow remote access to the Kibana console:

```azurecli-interactive
az vm open-port --port 5601 --resource-group $RESOURCE_GROUP --name $VM_NAME
```
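
Once the port is open, browse to http://<publicIpAddress>:5601 to reach the Kibana console. If you want to confirm from the shell that Kibana is reachable first (an optional check), a simple probe is:

```bash
curl -s -o /dev/null -w '%{http_code}\n' "http://$PUBLIC_IP_ADDRESS:5601"
```

Kibana can take a minute or two to start, so an empty or non-200 response right away usually just means it's still initializing.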

## Next steps

In this tutorial, you deployed the Elastic Stack into a development VM in Azure. You learned how to:

> [!div class="checklist"]
> * Create an Ubuntu VM in an Azure resource group
> * Install Elasticsearch, Logstash, and Kibana on the VM
> * Send sample data to Elasticsearch from Logstash
> * Open ports and work with data in the Kibana console
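
When you're done experimenting, you can remove everything this walkthrough created by deleting the resource group. This cleanup step isn't in the checklist above, but it's the standard way to avoid ongoing charges:

```azurecli-interactive
az group delete --name $RESOURCE_GROUP --yes --no-wait
```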

scenarios/metadata.json

Lines changed: 19 additions & 0 deletions
@@ -1653,5 +1653,24 @@
         }
       ]
     }
+  },
+  {
+    "status": "active",
+    "key": "azure-compute-docs/articles/virtual-machines/linux/tutorial-elasticsearch.md",
+    "title": "Deploy ElasticSearch on a development virtual machine in Azure",
+    "description": "Install the Elastic Stack (ELK) onto a development Linux VM in Azure",
+    "stackDetails": "",
+    "sourceUrl": "https://raw.githubusercontent.com/MicrosoftDocs/executable-docs/main/scenarios/azure-compute-docs/articles/virtual-machines/linux/tutorial-elasticsearch.md",
+    "documentationUrl": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-elasticsearch",
+    "nextSteps": [
+      {
+        "title": "Create a Linux VM with the Azure CLI",
+        "url": "https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-cli"
+      }
+    ],
+    "configurations": {
+      "permissions": [],
+      "configurableParams": []
+    }
   }
 ]

scenarios/sql-docs/docs/linux/quickstart-install-connect-docker.md

Lines changed: 4 additions & 4 deletions
@@ -1091,7 +1091,7 @@ The following steps use **sqlcmd** outside of your container to connect to [!INC
 
 ::: zone pivot="cs1-bash"
 
-```bash
+```text
 sudo sqlcmd -S <ip_address>,1433 -U <userid> -P "<password>"
 ```
 
@@ -1128,7 +1128,7 @@ The following steps use **sqlcmd** outside of your container to connect to [!INC
 
 ::: zone pivot="cs1-bash"
 
-```bash
+```text
 sudo sqlcmd
 ```
 
@@ -1170,7 +1170,7 @@ If you want to remove the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.
 
 ::: zone pivot="cs1-bash"
 
-```bash
+```text
 docker stop sql1
 docker rm sql1
 ```
@@ -1201,7 +1201,7 @@ If you want to remove the [!INCLUDE [ssnoversion-md](../includes/ssnoversion-md.
 
 ::: zone pivot="cs1-bash"
 
-```bash
+```text
 sudo sqlcmd delete --force
 ```
 