Commit e1a4ca9 (2 parents: 0c8c58b + 037c3b4)

Merge pull request #351 from guillermoap/update_docs

Update cluster-setup docs

File tree: 1 file changed, +3 −3 lines
docs/source/topics/cluster-setup.rst

Lines changed: 3 additions & 3 deletions
@@ -88,7 +88,7 @@ a common modules and import settings from it in component's modules.
 from __future__ import absolute_import
 from .worker import *

-CRAWLING_STRATEGY = '' # path to the crawling strategy class
+STRATEGY = '' # path to the crawling strategy class
 LOGGING_CONFIG='logging-sw.conf' # if needed

 The logging can be configured according to https://docs.python.org/2/library/logging.config.html see the
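The rename above means a strategy-worker settings module now defines ``STRATEGY`` instead of ``CRAWLING_STRATEGY``. A minimal sketch of such a settings module (the dotted strategy path and the log config filename below are illustrative placeholders, not values from this commit):

```python
# Hypothetical strategy-worker settings module (sketch only).
# In a real project this file would also pull in the shared worker
# settings, e.g.:
#   from __future__ import absolute_import
#   from .worker import *

# Renamed by this commit: CRAWLING_STRATEGY -> STRATEGY.
# Dotted path to the crawling strategy class (placeholder value):
STRATEGY = 'myproject.strategies.MyCrawlingStrategy'

# Optional logging configuration file, if needed:
LOGGING_CONFIG = 'logging-sw.conf'
```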
@@ -127,7 +127,7 @@ First, let's start storage worker: ::

 # start DB worker only for batch generation
 # use single instance for every 10 partitions
-$ python -m frontera.worker.db --config [db worker config module] --no-incoming --partitions 0,1
+$ python -m frontera.worker.db --config [db worker config module] --no-incoming --partitions 0 1


 # Optionally, start next one dedicated to spider log processing.
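The corrected command passes partition IDs space-separated (``--partitions 0 1``) rather than comma-separated. That is consistent with an argparse-style multi-value option; a minimal sketch of how such a flag parses (this is an assumption about the worker's CLI shape, not code taken from frontera):

```python
import argparse

# Sketch: an option declared with nargs='*' consumes "0 1" as two
# integer values, whereas a comma-separated "0,1" would arrive as a
# single string and fail int() conversion.
parser = argparse.ArgumentParser()
parser.add_argument('--partitions', type=int, nargs='*', default=None)

args = parser.parse_args(['--partitions', '0', '1'])
print(args.partitions)  # → [0, 1]
```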
@@ -158,4 +158,4 @@ Finally, a single spider per spider feed partition: ::
 You should end up with N spider processes running. Also :setting:`SPIDER_PARTITION_ID` can be read from config file.

 You're done, crawler should start crawling. Any component can be restarted any time, without major data loss. However,
-for pausing its enough to stop batch gen only.
+for pausing its enough to stop batch gen only.
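The hunk's context describes running one spider process per spider-feed partition, each with its own :setting:`SPIDER_PARTITION_ID`. A sketch of such a launch loop (``mycrawler`` is a hypothetical spider name, and the commands are only echoed here rather than executed):

```shell
# Print the launch command for each spider-feed partition (sketch;
# replace echo with the real invocation to actually start the spiders).
for partition_id in 0 1 2 3; do
  echo "scrapy crawl mycrawler -s SPIDER_PARTITION_ID=${partition_id}"
done
```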

0 commit comments
