.. versionadded:: 0.10
One of the most frequently required features when implementing scrapers is being able to store the scraped data properly and, quite often, that means generating an "export file" with the scraped data (commonly called an "export feed") to be consumed by other systems.
Scrapy provides this functionality out of the box with the Feed Exports, which let you generate a feed with the scraped items, using multiple serialization formats and storage backends.
.. _topics-feed-format:

Serialization formats
=====================

For serializing the scraped data, the feed exports use the :ref:`Item exporters <topics-exporters>`. The following formats are supported out of the box; you can also extend the supported formats through the :setting:`FEED_EXPORTERS` setting.
JSON
----

- :setting:`FEED_FORMAT`: ``json``
- Exporter used: :class:`~scrapy.contrib.exporter.JsonItemExporter`
- See :ref:`this warning <json-with-large-data>` if you're using JSON with large feeds

JSON lines
----------

- :setting:`FEED_FORMAT`: ``jsonlines``
- Exporter used: :class:`~scrapy.contrib.exporter.JsonLinesItemExporter`

CSV
---

- :setting:`FEED_FORMAT`: ``csv``
- Exporter used: :class:`~scrapy.contrib.exporter.CsvItemExporter`

XML
---

- :setting:`FEED_FORMAT`: ``xml``
- Exporter used: :class:`~scrapy.contrib.exporter.XmlItemExporter`

Pickle
------

- :setting:`FEED_FORMAT`: ``pickle``
- Exporter used: :class:`~scrapy.contrib.exporter.PickleItemExporter`

Marshal
-------

- :setting:`FEED_FORMAT`: ``marshal``
- Exporter used: :class:`~scrapy.contrib.exporter.MarshalItemExporter`
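To select one of these formats you set :setting:`FEED_FORMAT`, typically together with :setting:`FEED_URI` (described below). A minimal sketch, assuming the usual project ``settings.py`` (the output path is just a placeholder)::

    # settings.py -- export scraped items as a JSON feed to a local file
    FEED_URI = 'file:///tmp/export.json'
    FEED_FORMAT = 'json'

Depending on your Scrapy version, the same pair of settings can also be overridden from the command line (e.g. with the ``--set`` option of ``scrapy crawl``).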
Storages
========

When using the feed exports you define where to store the feed using a URI (through the :setting:`FEED_URI` setting). The feed exports support multiple storage backend types which are defined by the URI scheme.
The storage backends supported out of the box are:

- Local filesystem
- FTP
- S3
- Standard output
Some storage backends may be unavailable if their required external libraries are not installed. For example, the S3 backend is only available if the boto library is installed.
Storage URI parameters
======================

The storage URI can also contain parameters that get replaced when the feed is being created. These parameters are:
- ``%(time)s`` - gets replaced by a timestamp when the feed is being created
- ``%(name)s`` - gets replaced by the spider name

Any other named parameter gets replaced by the spider attribute of the same name. For example, ``%(site_id)s`` would get replaced by the ``spider.site_id`` attribute the moment the feed is being created.
Here are some examples to illustrate:

- Store in FTP using one directory per spider::

      ftp://user:password@ftp.example.com/scraping/feeds/%(name)s/%(time)s.json

- Store in S3 using one directory per spider::

      s3://mybucket/scraping/feeds/%(name)s/%(time)s.json
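The spider-attribute substitution works the same way. A minimal sketch, assuming a spider that defines a ``site_id`` attribute (the spider and the attribute value below are hypothetical)::

    # settings.py -- one output directory per site, via the spider's
    # (hypothetical) site_id attribute
    FEED_URI = 'file:///tmp/feeds/%(site_id)s/%(time)s.json'

    # somewhere in the project: the spider providing that attribute
    from scrapy.spider import BaseSpider

    class ExampleSpider(BaseSpider):
        name = 'example'
        site_id = 42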
.. _topics-feed-storage-backends:

Storage backends
================

Local filesystem
----------------

The feeds are stored in the local filesystem.
- URI scheme: ``file``
- Example URI: ``file:///tmp/export.csv``
- Required external libraries: none
Note that for the local filesystem storage (only) you can omit the scheme if you specify an absolute path like ``/tmp/export.csv``. This only works on Unix systems though.
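A minimal sketch of that scheme-less form (the path is a placeholder)::

    # settings.py -- absolute path, Unix only; equivalent to
    # FEED_URI = 'file:///tmp/export.csv'
    FEED_URI = '/tmp/export.csv'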
FTP
---

The feeds are stored in an FTP server.
- URI scheme: ``ftp``
- Example URI: ``ftp://user:pass@ftp.example.com/path/to/export.csv``
- Required external libraries: none
S3
--

The feeds are stored on Amazon S3.
- URI scheme: ``s3``
- Example URIs:

  - ``s3://mybucket/path/to/export.csv``
  - ``s3://aws_key:aws_secret@mybucket/path/to/export.csv``

- Required external libraries: boto
The AWS credentials can be passed as user/password in the URI, or they can be passed through the following settings:

- :setting:`AWS_ACCESS_KEY_ID`
- :setting:`AWS_SECRET_ACCESS_KEY`
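For example, a minimal sketch of the settings-based approach (the key values are placeholders, not real credentials)::

    # settings.py -- credentials used by the S3 feed storage backend
    AWS_ACCESS_KEY_ID = 'your-access-key'        # placeholder
    AWS_SECRET_ACCESS_KEY = 'your-secret-key'    # placeholder
    FEED_URI = 's3://mybucket/path/to/export.csv'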
Standard output
---------------

The feeds are written to the standard output of the Scrapy process.
- URI scheme: ``stdout``
- Example URI: ``stdout:``
- Required external libraries: none
Settings
========

These are the settings used for configuring the feed exports:
.. currentmodule:: scrapy.contrib.feedexport
.. setting:: FEED_URI

FEED_URI
--------

Default: ``None``
The URI of the export feed. See :ref:`topics-feed-storage-backends` for supported URI schemes.
This setting is required for enabling the feed exports.
.. setting:: FEED_FORMAT

FEED_FORMAT
-----------
The serialization format to be used for the feed. See :ref:`topics-feed-format` for possible values.
.. setting:: FEED_STORE_EMPTY

FEED_STORE_EMPTY
----------------

Default: ``False``

Whether to export empty feeds (i.e. feeds with no items).
.. setting:: FEED_STORAGES

FEED_STORAGES
-------------

Default: ``{}``
A dict containing additional feed storage backends supported by your project. The keys are URI schemes and the values are paths to storage classes.
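A hedged sketch of such an entry (the ``sftp`` scheme and the class path are hypothetical; Scrapy does not ship this backend, you would implement the storage class yourself)::

    # settings.py -- map the (hypothetical) sftp:// URI scheme to a
    # custom storage class implemented in this project
    FEED_STORAGES = {
        'sftp': 'myproject.feedstorage.SFTPFeedStorage',
    }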
.. setting:: FEED_STORAGES_BASE

FEED_STORAGES_BASE
------------------

Default::

    {
        '': 'scrapy.contrib.feedexport.FileFeedStorage',
        'file': 'scrapy.contrib.feedexport.FileFeedStorage',
        'stdout': 'scrapy.contrib.feedexport.StdoutFeedStorage',
        's3': 'scrapy.contrib.feedexport.S3FeedStorage',
        'ftp': 'scrapy.contrib.feedexport.FTPFeedStorage',
    }
A dict containing the built-in feed storage backends supported by Scrapy.
.. setting:: FEED_EXPORTERS

FEED_EXPORTERS
--------------

Default: ``{}``
A dict containing additional exporters supported by your project. The keys are serialization formats and the values are paths to :ref:`Item exporter <topics-exporters>` classes.
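A hedged sketch of registering an extra format (the ``yaml`` key and the exporter path are hypothetical; you would implement the exporter yourself, e.g. by subclassing :class:`~scrapy.contrib.exporter.BaseItemExporter`)::

    # settings.py -- make FEED_FORMAT = 'yaml' usable in this project
    FEED_EXPORTERS = {
        'yaml': 'myproject.exporters.YamlItemExporter',
    }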
.. setting:: FEED_EXPORTERS_BASE

FEED_EXPORTERS_BASE
-------------------

Default::

    {
        'json': 'scrapy.contrib.exporter.JsonItemExporter',
        'jsonlines': 'scrapy.contrib.exporter.JsonLinesItemExporter',
        'csv': 'scrapy.contrib.exporter.CsvItemExporter',
        'xml': 'scrapy.contrib.exporter.XmlItemExporter',
        'pickle': 'scrapy.contrib.exporter.PickleItemExporter',
        'marshal': 'scrapy.contrib.exporter.MarshalItemExporter',
    }
A dict containing the built-in feed exporters supported by Scrapy.