
Commit 3215984

Ci listener (#4912)
* start elasticsearch container in CI
* setup listener in CI
* adjust

Co-authored-by: Sophie <84560950+Sophie-Xie@users.noreply.github.com>
1 parent 758cf61 commit 3215984

11 files changed, +151 −33 lines

.github/workflows/nightly.yml

+12
@@ -220,6 +220,18 @@ jobs:
       OSS_DIR: nebula-graph/package/nightly
     container:
       image: vesoft/nebula-dev:${{ matrix.os }}
+    services:
+      elasticsearch:
+        image: elasticsearch:7.17.7
+        ports:
+          - 9200:9200
+        env:
+          discovery.type: single-node
+        options: >-
+          --health-cmd "curl elasticsearch:9200"
+          --health-interval 10s
+          --health-timeout 5s
+          --health-retries 5
     steps:
       - uses: webiny/action-post-run@2.0.1
         with:

.github/workflows/pull_request.yml

+12
@@ -72,6 +72,18 @@ jobs:
       volumes:
         - /tmp/ccache/nebula/${{ matrix.os }}-${{ matrix.compiler }}:/tmp/ccache/nebula/${{ matrix.os }}-${{ matrix.compiler }}
       options: --cap-add=SYS_PTRACE
+    services:
+      elasticsearch:
+        image: elasticsearch:7.17.7
+        ports:
+          - 9200:9200
+        env:
+          discovery.type: single-node
+        options: >-
+          --health-cmd "curl elasticsearch:9200"
+          --health-interval 10s
+          --health-timeout 5s
+          --health-retries 5
     steps:
       - uses: webiny/action-post-run@2.0.1
         with:
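Both workflows start the same Elasticsearch 7.17.7 service container, expose port 9200, and gate the job on the `curl elasticsearch:9200` health check. As a point of reference only (not part of this commit), a readiness probe equivalent to that health check could look like the sketch below; it assumes the `elasticsearch` hostname is resolvable from the job container (use `127.0.0.1` when going through the mapped port on the runner) and uses only the Python standard library.

```python
# Hypothetical readiness probe, mirroring the service health check above.
# Assumption: "elasticsearch" resolves from inside the job container.
import json
import time
import urllib.request

ES_URL = "http://elasticsearch:9200"

def wait_for_elasticsearch(url=ES_URL, retries=5, interval=10):
    """Poll Elasticsearch until it answers; return the reported version."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                info = json.load(resp)
                return info.get("version", {}).get("number")  # e.g. "7.17.7"
        except OSError:
            time.sleep(interval)
    raise RuntimeError("Elasticsearch did not become ready")

if __name__ == "__main__":
    print("Elasticsearch version:", wait_for_elasticsearch())
```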
conf/nebula-storaged-listener.conf.default

+58

@@ -0,0 +1,58 @@
+########## nebula-storaged-listener ###########
+########## basics ##########
+# Whether to run as a daemon process
+--daemonize=true
+# The file to host the process id
+--pid_file=pids_listener/nebula-storaged.pid
+# Whether to use the configuration obtained from the configuration file
+--local_config=true
+
+########## logging ##########
+# The directory to host logging files
+--log_dir=logs_listener
+# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
+--minloglevel=0
+# Verbose log level, 1, 2, 3, 4; the higher the level, the more verbose the logging
+--v=0
+# Maximum seconds to buffer the log messages
+--logbufsecs=0
+# Whether to redirect stdout and stderr to separate output files
+--redirect_stdout=true
+# Destination filename of stdout and stderr, which will also reside in log_dir.
+--stdout_log_file=storaged-stdout.log
+--stderr_log_file=storaged-stderr.log
+# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
+--stderrthreshold=2
+# Whether logging files' names contain a timestamp.
+--timestamp_in_logfile_name=true
+
+########## networking ##########
+# Meta server address
+--meta_server_addrs=127.0.0.1:9559
+# Local ip
+--local_ip=127.0.0.1
+# Storage daemon listening port
+--port=9789
+# HTTP service ip
+--ws_ip=127.0.0.1
+# HTTP service port
+--ws_http_port=19789
+# heartbeat with meta service
+--heartbeat_interval_secs=10
+
+########## storage ##########
+# Listener wal directory. Only one path is allowed.
+--listener_path=data/listener
+# This parameter can be ignored for compatibility; fill in the default value of "data".
+--data_path=data
+# The type of part manager, [memory | meta]
+--part_man_type=memory
+# The default reserved bytes for one batch operation
+--rocksdb_batch_size=4096
+# The default block cache size used in BlockBasedTable.
+# The unit is MB.
+--rocksdb_block_cache=4
+# The type of storage engine, `rocksdb', `memory', etc.
+--engine_type=rocksdb
+# The type of part, `simple', `consensus'...
+--part_type=simple
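For orientation (commentary, not part of the committed file): together with the `NebulaProcess` changes in `tests/common/nebula_service.py` below, a process created with the name "listener" reuses the `storaged` binary but is pointed at this flag file. A rough sketch of how the launch command gets assembled, using illustrative port and path values:

```python
# Sketch only: mirrors _format_nebula_command() for a listener process.
# The ports, log and pid paths here are example values, not the CI ones.
binary_name = "storaged"           # the listener runs the storaged binary ...
conf_name = "storaged-listener"    # ... with its own flag file
process_params = {
    "log_dir": "logs0",
    "pid_file": "pids0/nebula-storaged.pid",
    "port": 9789,
    "ws_http_port": 19789,
}
cmd = [
    "bin/nebula-{}".format(binary_name),
    "--flagfile",
    "conf/nebula-{}.conf".format(conf_name),
] + ["--{}={}".format(key, value) for key, value in process_params.items()]
print(" ".join(cmd))
# bin/nebula-storaged --flagfile conf/nebula-storaged-listener.conf --log_dir=logs0 ...
```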

tests/README.md

+3 −3
@@ -175,15 +175,15 @@ e.g.
 ```gherkin
 Feature: Nebula service termination test
   Scenario: Basic termination test
-    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged
+    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener
     When the cluster was terminated
     Then no service should still running after 4s
 ```

 ```gherkin
 Feature: Example
   Scenario: test with disable authorize
-    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged:
+    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener:
      """
      graphd:enable_authorize=false
      """
@@ -201,7 +201,7 @@ Feature: Example
     Then the execution should be successful

   Scenario: test with enable authorize
-    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged:
+    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener:
      """
      graphd:enable_authorize=true
      """

tests/common/nebula_service.py

+43 −18
@@ -41,6 +41,12 @@ def __init__(self, name, ports, suffix_index=0, params=None, is_standalone=False
         self.tcp_port, self.tcp_internal_port, self.http_port, self.https_port = ports[0:4]
         self.meta_port, self.meta_tcp_internal_port, self.meta_http_port, self.meta_https_port = ports[4:8]
         self.storage_port, self.storage_tcp_internal_port, self.storage_http_port, self.storage_https_port = ports[8:12]
+        if name == "listener":
+            self.binary_name = "storaged"
+            self.conf_name = "storaged-listener"
+        else:
+            self.binary_name = name
+            self.conf_name = name
         self.suffix_index = suffix_index
         self.params = params
         self.host = '127.0.0.1'
@@ -56,14 +62,14 @@ def _format_nebula_command(self):
         if self.is_sa == False:
             process_params = {
                 'log_dir': 'logs{}'.format(self.suffix_index),
-                'pid_file': 'pids{}/nebula-{}.pid'.format(self.suffix_index, self.name),
+                'pid_file': 'pids{}/nebula-{}.pid'.format(self.suffix_index, self.binary_name),
                 'port': self.tcp_port,
                 'ws_http_port': self.http_port,
             }
         else:
             process_params = {
                 'log_dir': 'logs{}'.format(self.suffix_index),
-                'pid_file': 'pids{}/nebula-{}.pid'.format(self.suffix_index, self.name),
+                'pid_file': 'pids{}/nebula-{}.pid'.format(self.suffix_index, self.binary_name),
                 'port': self.tcp_port,
                 'ws_http_port': self.http_port,
                 'meta_port': self.meta_port,
@@ -72,16 +78,16 @@ def _format_nebula_command(self):
                 'ws_storage_http_port': self.storage_http_port,
             }
         # data path
-        if self.name.upper() != 'GRAPHD':
+        if self.binary_name.upper() != 'GRAPHD':
             process_params['data_path'] = 'data{}/{}'.format(
-                self.suffix_index, self.name
+                self.suffix_index, self.binary_name
             )

         process_params.update(self.params)
         cmd = [
-            'bin/nebula-{}'.format(self.name),
+            'bin/nebula-{}'.format(self.binary_name),
             '--flagfile',
-            'conf/nebula-{}.conf'.format(self.name),
+            'conf/nebula-{}.conf'.format(self.conf_name),
         ] + ['--{}={}'.format(key, value) for key, value in process_params.items()]

         return " ".join(cmd)
@@ -126,35 +132,39 @@ def __init__(
         metad_num=1,
         storaged_num=1,
         graphd_num=1,
+        listener_num=1,
         ca_signed=False,
         debug_log=True,
         use_standalone=False,
         query_concurrently=False,
         **kwargs,
     ):
-        assert graphd_num > 0 and metad_num > 0 and storaged_num > 0
+        assert graphd_num > 0 and metad_num > 0 and storaged_num > 0 and listener_num >= 0
         self.build_dir = str(build_dir)
         self.src_dir = str(src_dir)
         self.work_dir = os.path.join(
             self.build_dir,
             'server_' + time.strftime('%Y-%m-%dT%H-%M-%S', time.localtime()),
         )
         self.pids = {}
-        self.metad_num, self.storaged_num, self.graphd_num = (
+        self.metad_num, self.storaged_num, self.graphd_num, self.listener_num = (
             metad_num,
             storaged_num,
             graphd_num,
+            listener_num,
         )
-        self.metad_processes, self.storaged_processes, self.graphd_processes = (
+        self.metad_processes, self.storaged_processes, self.graphd_processes, self.listener_processes = (
+            [],
             [],
             [],
             [],
         )
         self.all_processes = []
         self.all_ports = []
-        self.metad_param, self.storaged_param, self.graphd_param = {}, {}, {}
+        self.metad_param, self.storaged_param, self.graphd_param, self.listener_param = {}, {}, {}, {}
         self.storaged_port = 0
         self.graphd_port = 0
+        self.listener_port = 0
         self.ca_signed = ca_signed
         self.is_graph_ssl = (
             kwargs.get("enable_graph_ssl", "false").upper() == "TRUE"
@@ -175,7 +185,6 @@ def __init__(
             self._make_sa_params(**kwargs)
             self.init_standalone()

-
     def init_standalone(self):
         process_count = self.metad_num + self.storaged_num + self.graphd_num
         ports_count = process_count * self.ports_per_process
@@ -184,7 +193,7 @@ def init_standalone(self):
         index = 0
         standalone = NebulaProcess(
             "standalone",
-            self.all_ports[index : index + ports_count ],
+            self.all_ports[index: index + ports_count],
             index,
             self.graphd_param,
             is_standalone=True
@@ -205,15 +214,15 @@ def init_standalone(self):
             p.update_meta_server_addrs(meta_server_addrs)

     def init_process(self):
-        process_count = self.metad_num + self.storaged_num + self.graphd_num
+        process_count = self.metad_num + self.storaged_num + self.graphd_num + self.listener_num
        ports_count = process_count * self.ports_per_process
        self.all_ports = self._find_free_port(ports_count)
        index = 0

        for suffix_index in range(self.metad_num):
            metad = NebulaProcess(
                "metad",
-                self.all_ports[index : index + self.ports_per_process],
+                self.all_ports[index: index + self.ports_per_process],
                suffix_index,
                self.metad_param,
            )
@@ -223,7 +232,7 @@ def init_process(self):
         for suffix_index in range(self.storaged_num):
             storaged = NebulaProcess(
                 "storaged",
-                self.all_ports[index : index + self.ports_per_process],
+                self.all_ports[index: index + self.ports_per_process],
                 suffix_index,
                 self.storaged_param,
             )
@@ -235,7 +244,7 @@ def init_process(self):
         for suffix_index in range(self.graphd_num):
             graphd = NebulaProcess(
                 "graphd",
-                self.all_ports[index : index + self.ports_per_process],
+                self.all_ports[index: index + self.ports_per_process],
                 suffix_index,
                 self.graphd_param,
             )
@@ -244,8 +253,20 @@ def init_process(self):
             if suffix_index == 0:
                 self.graphd_port = self.all_ports[0]

+        for suffix_index in range(self.storaged_num, self.storaged_num+self.listener_num):
+            listener = NebulaProcess(
+                "listener",
+                self.all_ports[index: index + self.ports_per_process],
+                suffix_index,
+                self.listener_param
+            )
+            self.listener_processes.append(listener)
+            index += self.ports_per_process
+            if suffix_index == 0:
+                self.listener_port = self.all_ports[0]
+
         self.all_processes = (
-            self.metad_processes + self.storaged_processes + self.graphd_processes
+            self.metad_processes + self.storaged_processes + self.graphd_processes + self.listener_processes
         )
         # update meta address
         meta_server_addrs = ','.join(
@@ -301,11 +322,14 @@ def _make_params(self, **kwargs):
         self.storaged_param['raft_heartbeat_interval_secs'] = '30'
         self.storaged_param['skip_wait_in_rate_limiter'] = 'true'

+        # params for listener only
+        self.listener_param = copy.copy(self.storaged_param)
+
         # params for meta only
         self.metad_param = copy.copy(_params)
         self.metad_param["default_parts_num"] = 1

-        for p in [self.metad_param, self.storaged_param, self.graphd_param]:
+        for p in [self.metad_param, self.storaged_param, self.graphd_param, self.listener_param]:
             p.update(kwargs)

     def _make_sa_params(self, **kwargs):
@@ -358,6 +382,7 @@ def _copy_nebula_conf(self):
                 conf_path + '{}.conf.default'.format(item),
                 self.work_dir + '/conf/{}.conf'.format(item),
             )
+        shutil.copy(conf_path+'nebula-storaged-listener.conf.default', self.work_dir+'/conf/nebula-storaged-listener.conf')

         resources_dir = self.work_dir + '/share/resources/'
         os.makedirs(resources_dir)
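Two side notes on `init_process()` above (commentary, not part of the diff): the up-front free-port reservation now budgets for the listener processes as well, and the listener's `suffix_index` range starts at `storaged_num`, so its `logs{N}`/`pids{N}`/`data{N}` directories presumably do not collide with those of a storaged instance. A small sketch with illustrative numbers:

```python
# Illustrative numbers only; ports_per_process is whatever NebulaService uses
# internally (assumed to be 12 here just for the arithmetic).
metad_num, storaged_num, graphd_num, listener_num = 1, 1, 1, 1
ports_per_process = 12

process_count = metad_num + storaged_num + graphd_num + listener_num
ports_count = process_count * ports_per_process
print(ports_count)  # 48 free ports reserved up front by _find_free_port()

# Listener suffix indices start after the storaged ones:
print(list(range(storaged_num, storaged_num + listener_num)))  # [1]
```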

tests/nebula-test-run.py

+1
@@ -241,6 +241,7 @@ def stop_nebula(nb, configs=None):
         NEBULA_HOME,
         graphd_num=graphd_inst,
         storaged_num=1,
+        listener_num=1,
         debug_log=opt_is(configs.debug, "true"),
         ca_signed=opt_is(configs.ca_signed, "true"),
         enable_ssl=configs.enable_ssl,

tests/tck/cluster/Example.feature

+3 −3
@@ -5,7 +5,7 @@
 Feature: Example

   Scenario: test with disable authorize
-    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged:
+    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener:
      """
      graphd:enable_authorize=false
      """
@@ -23,7 +23,7 @@ Feature: Example
     Then an PermissionError should be raised at runtime: No permission to grant/revoke god user.

   Scenario: test with enable authorize
-    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged:
+    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener:
      """
      graphd:enable_authorize=true
      """
@@ -41,7 +41,7 @@ Feature: Example
     Then an PermissionError should be raised at runtime: No permission to grant/revoke god user.

   Scenario: test with auth type is cloud
-    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged:
+    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener:
      """
      graphd:auth_type=cloud
      """

tests/tck/cluster/terminate.feature

+1 −1
@@ -5,6 +5,6 @@ Feature: Nebula service termination test

   # All nebula services should exit as expected after termination
   Scenario: Basic termination test
-    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged
+    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener
     When the cluster was terminated
     Then no service should still running after 4s
