Commit afb733b

inishchith authored, jgbarah committed

[micro-mordred] Add tutorial for execution of Micro-Mordred via Docker-Compose
Signed-off-by: inishchith <inishchith@gmail.com>

1 parent 9315c2e commit afb733b

File tree: 2 files changed, +137 −0 lines changed


_data/sidebars/home_sidebar.yml

Lines changed: 4 additions & 0 deletions

    @@ -142,6 +142,10 @@ entries:
          url: /sirmordred/container.html
          output: web

    +    - title: Executing Micro-Mordred via Docker-Compose
    +      url: /sirmordred/micro-mordred.html
    +      output: web
    +
        - title: The projects file
          url: /sirmordred/projects.html
          output: web

sirmordred/micro-mordred.md

Lines changed: 133 additions & 0 deletions
### Micro-Mordred via Docker-Compose
### What is Mordred?

- Mordred is the tool used to orchestrate the execution of the GrimoireLab platform via a configuration file. More details about the sections of the configuration file can be found [here](https://github.com/chaoss/grimoirelab-sirmordred#general-sections).
6+
7+
### What is Micro-Mordred?
8+
9+
- Micro-Mordred is a simplified version of Mordred which omits the use of its scheduler. Thus, Micro-Mordred allows to run single Mordred tasks (e.g. raw collection, enrichment) per execution. We can find the implementation of micro-mordred located in [/utils](https://github.com/chaoss/grimoirelab-sirmordred/tree/master/utils/micro.py) directory and it can be executed via command line.
10+
11+
12+
- In this tutorial, we'll try to execute micro-mordred with the help of docker-compose. `Docker-Compose` is a tool for defining and running multi-container Docker applications. As our application in this case (`micro-mordred`), requires instances of ElasticSearch, Kibiter ( a soft-fork of Kibana ) and MariaDB. We'll use `docker-compose` to handle the dependent instances.
13+
14+
### Steps for execution

1. We'll use the following docker-compose configuration to instantiate the required components, i.e. ElasticSearch, Kibiter and MariaDB. Note that you can omit the `mariadb` section if MySQL/MariaDB is already installed on your system. We'll save the following configuration as `docker-config.yml`.
```yaml
elasticsearch:
  restart: on-failure:5
  image: bitergia/elasticsearch:6.1.0-secured
  command: elasticsearch -Enetwork.bind_host=0.0.0.0 -Ehttp.max_content_length=2000mb
  environment:
    - ES_JAVA_OPTS=-Xms2g -Xmx2g
  ports:
    - 9200:9200

kibiter:
  restart: on-failure:5
  image: bitergia/kibiter:secured-v6.1.4-2
  environment:
    - PROJECT_NAME=Development
    - NODE_OPTIONS=--max-old-space-size=1000
    - ELASTICSEARCH_URL=https://elasticsearch:9200
  links:
    - elasticsearch
  ports:
    - 5601:5601

mariadb:
  restart: on-failure:5
  image: mariadb:10.0
  expose:
    - "3306"
  ports:
    - "3306:3306"
  environment:
    - MYSQL_ROOT_PASSWORD=
    - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    - MYSQL_DATABASE=test_sh
  command: --wait_timeout=2592000 --interactive_timeout=2592000 --max_connections=300
  log_driver: "json-file"
  log_opt:
    max-size: "100m"
    max-file: "3"
```
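Before bringing the containers up, it can help to confirm that the host ports published above (9200, 5601 and 3306) are not already in use. The helper below is not part of the tutorial; it is a small stdlib-only sketch:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 on a successful connection, i.e. something listens there
        return s.connect_ex((host, port)) != 0

# Ports published by docker-config.yml above.
for port in (9200, 5601, 3306):
    print(port, "free" if port_is_free(port) else "in use")
```

If a port is reported "in use", either stop the conflicting service or change the host side of the port mapping in `docker-config.yml`.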
You can now run the following command to start the individual instances:

```
$ docker-compose -f docker-config.yml up
```
64+
65+
Once you see something similar to the below `log` on your console, it means that you've successfully instantiated the containers corresponding to the required components.
66+
67+
```
68+
elasticsearch_1 | Search Guard Admin v6
69+
elasticsearch_1 | Will connect to 0.0.0.0:9300 ... done
70+
elasticsearch_1 | [2019-05-30T09:38:20,113][ERROR][c.f.s.a.BackendRegistry ] Not yet initialized (you may need to run sgadmin)
71+
elasticsearch_1 | Elasticsearch Version: 6.1.0
72+
elasticsearch_1 | Search Guard Version: 6.1.0-21.0
73+
elasticsearch_1 | Connected as CN=kirk,OU=client,O=client,L=test,C=de
74+
elasticsearch_1 | Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
75+
elasticsearch_1 | Clustername: bitergia_elasticsearch
76+
elasticsearch_1 | Clusterstate: GREEN
77+
elasticsearch_1 | Number of nodes: 1
78+
elasticsearch_1 | Number of data nodes: 1
79+
80+
...
81+
elasticsearch_1 | Done with success
82+
elasticsearch_1 | $@
83+
84+
...
85+
kibiter_1 | {"type":"log","@timestamp":"2019-05-30T09:38:25Z","tags":["status","plugin:elasticsearch@6.1.4-1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Service Unavailable"}
86+
```
- **Note**: If you face a memory-related error that keeps the ElasticSearch instance from starting completely (which in turn makes the linked Kibiter instance fail with a `Request timeout`), try lowering the `ES_JAVA_OPTS` values in the *environment* attribute of the `docker-config.yml` file, e.g. `-Xms1g -Xmx1g`.
2. At this point, you should be able to access the *ElasticSearch* instance via `http://admin:admin@localhost:9200` and the *Kibiter* instance via `http://admin:admin@localhost:5601` in your browser (similar to the screenshot below).
<div align="center">
  <img src="https://i.imgur.com/Czunlpr.png">
  <br>
  <p><b>Browser: Kibiter Instance</b></p>
</div>
99+
3. As you can see on the `Kibiter Instance` above, it says `Couldn't find any Elasticsearch data. You'll need to index some data into Elasticsearch before you can create an index pattern`. Hence, in order to index some data, we'll now execute micro-mordred using the following command, which will call the `Raw` and `Enrich` tasks for the Git config section from the provided `setup.cfg` file.
100+
101+
```
102+
$ python3 micro.py --raw --enrich setup.cfg --backends git
103+
```
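For reference, the flags used in this tutorial's commands correspond to a straightforward CLI. The sketch below only mimics that interface so you can see how the pieces fit together (flag names are taken from the commands shown in this tutorial; this is not micro.py's actual implementation):

```python
import argparse

def build_parser():
    """A sketch of the CLI surface used in this tutorial -- not micro.py itself."""
    parser = argparse.ArgumentParser(prog="micro.py")
    parser.add_argument("--raw", action="store_true", help="run the raw collection task")
    parser.add_argument("--enrich", action="store_true", help="run the enrichment task")
    parser.add_argument("--panels", action="store_true", help="load the Sigils panels")
    parser.add_argument("--cfg", help="path to the setup.cfg configuration file")
    parser.add_argument("--backends", nargs="+", help="backend sections to run, e.g. git")
    return parser

args = build_parser().parse_args(
    ["--raw", "--enrich", "--cfg", "setup.cfg", "--backends", "git"]
)
print(args.raw, args.enrich, args.cfg, args.backends)
```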
The above command requires two files:
- `setup.cfg`: contains the configuration sections for the different components and tools
- `projects.json`: contains the list of projects to analyze

Read more about the projects file [here](https://github.com/chaoss/grimoirelab-tutorial/blob/master/sirmordred/projects.md).

For the purpose of this tutorial we'll use the files provided in the `/utils` directory, but feel free to play around with the files and their configurations :)
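For illustration only, a minimal `projects.json` could look like the dictionary below. The project name and repository URL are placeholders, not the contents shipped in `/utils`, and the file is typically referenced from the projects section of `setup.cfg`:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical minimal projects file: one project with a single git repository.
projects = {
    "grimoire": {
        "git": ["https://github.com/chaoss/grimoirelab-sirmordred.git"]
    }
}

# Written to a temporary directory here so we don't clobber a real projects.json.
path = Path(tempfile.mkdtemp()) / "projects.json"
path.write_text(json.dumps(projects, indent=4))

loaded = json.loads(path.read_text())
print(loaded["grimoire"]["git"])
```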
- **Note**: In case the process fails to index the data into ElasticSearch, check the `.perceval` folder in your home directory; it may already contain clones of the repositories listed in the `projects.json` file. Remove those repositories with the following command and try again.

```
$ rm -rf .perceval/repositories/...
```
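The cleanup in the note above can also be scripted. This is a hedged sketch: the `~/.perceval/repositories` layout is assumed from the note, and you should double-check the path before deleting anything:

```python
import shutil
from pathlib import Path

def clear_perceval_cache(base=Path.home() / ".perceval" / "repositories"):
    """Remove every cached repository directory under base; return the removed names."""
    removed = []
    if base.is_dir():
        for repo in base.iterdir():
            if repo.is_dir():
                shutil.rmtree(repo)   # destructive: verify 'base' first
                removed.append(repo.name)
    return removed
```

Calling `clear_perceval_cache()` with no arguments wipes the whole cache; pass a narrower path to remove only specific repositories.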
4. Now we can create the index pattern and, once it has been created, analyze the data field by field. Then we execute the `panels` task to load the corresponding `Sigils panels` into the Kibiter instance, using the following command:

```
$ python3 micro.py --panels --cfg setup.cfg
```
On successful execution of the above command, we get dashboards similar to the one shown below.

<div align="center">
  <img src="https://i.imgur.com/Of09Voi.png">
  <br>
  <p><b>Dashboard - Git: Areas of Code</b></p>
</div>
- Hence, we have successfully executed Micro-Mordred with the help of Docker-Compose.
