Item Pipeline

    Each item pipeline component (sometimes referred to as just “Item Pipeline”) is a Python class that implements a simple method. It receives an item and performs an action over it, also deciding whether the item should continue through the pipeline or be dropped and no longer processed.

    Typical uses of item pipelines are:

    • cleansing HTML data

    • validating scraped data (checking that the items contain certain fields)

    • checking for duplicates (and dropping them)

    • storing the scraped item in a database

    Each item pipeline component is a Python class that must implement the following method:

    process_item(self, item, spider)

    This method is called for every item pipeline component.

    item is an item object; see Supporting All Item Types.

    process_item() must either: return an item object, return a Deferred, or raise a DropItem exception.

    Dropped items are no longer processed by further pipeline components.

    • Parameters

      • item (item object) – the scraped item

      • spider (Spider object) – the spider which scraped the item

    Additionally, they may also implement the following methods:

    open_spider(self, spider)

    This method is called when the spider is opened.

    • Parameters

      spider (Spider object) – the spider which was opened

    close_spider(self, spider)

    This method is called when the spider is closed.

    • Parameters

      spider (Spider object) – the spider which was closed

    from_crawler(cls, crawler)

    If present, this class method is called to create a pipeline instance from a Crawler. It must return a new instance of the pipeline. The Crawler object provides access to all Scrapy core components, like settings and signals; it is a way for the pipeline to access them and hook its functionality into Scrapy.

    • Parameters

      crawler (Crawler object) – crawler that uses this pipeline

    Let’s take a look at the following hypothetical pipeline that adjusts the price attribute for those items that do not include VAT (price_excludes_vat attribute), and drops those items which don’t contain a price:
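    A minimal sketch of such a pipeline; the vat_factor value of 1.15 and the field names are illustrative:

    from itemadapter import ItemAdapter
    from scrapy.exceptions import DropItem


    class PricePipeline:
        vat_factor = 1.15

        def process_item(self, item, spider):
            adapter = ItemAdapter(item)
            if adapter.get("price"):
                # Add VAT to prices that don't already include it.
                if adapter.get("price_excludes_vat"):
                    adapter["price"] = adapter["price"] * self.vat_factor
                return item
            else:
                # Items without a price are dropped from further processing.
                raise DropItem(f"Missing price in {item}")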

    The following pipeline stores all scraped items (from all spiders) into a single items.jl file, containing one item per line serialized in JSON format:

    import json

    from itemadapter import ItemAdapter


    class JsonWriterPipeline:
        def open_spider(self, spider):
            self.file = open('items.jl', 'w')

        def close_spider(self, spider):
            self.file.close()

        def process_item(self, item, spider):
            line = json.dumps(ItemAdapter(item).asdict()) + "\n"
            self.file.write(line)
            return item

    Note

    The purpose of JsonWriterPipeline is just to introduce how to write item pipelines. If you really want to store all scraped items into a JSON file you should use the Feed exports.

    In this example we’ll write items to MongoDB using pymongo. The MongoDB address and database name are specified in Scrapy settings; the MongoDB collection is named after the item class.

    The main point of this example is to show how to use the from_crawler() method and how to clean up the resources properly:
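    A sketch of such a pipeline, assuming the connection details are exposed through MONGO_URI and MONGO_DATABASE settings (these setting names are illustrative):

    import pymongo
    from itemadapter import ItemAdapter


    class MongoPipeline:
        def __init__(self, mongo_uri, mongo_db):
            self.mongo_uri = mongo_uri
            self.mongo_db = mongo_db

        @classmethod
        def from_crawler(cls, crawler):
            # Pull the connection details from the Scrapy settings.
            return cls(
                mongo_uri=crawler.settings.get("MONGO_URI"),
                mongo_db=crawler.settings.get("MONGO_DATABASE", "items"),
            )

        def open_spider(self, spider):
            self.client = pymongo.MongoClient(self.mongo_uri)
            self.db = self.client[self.mongo_db]

        def close_spider(self, spider):
            # Clean up the MongoDB connection when the spider finishes.
            self.client.close()

        def process_item(self, item, spider):
            # The collection is named after the item class.
            collection_name = type(item).__name__
            self.db[collection_name].insert_one(ItemAdapter(item).asdict())
            return item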

    This example demonstrates how to use coroutine syntax in the process_item() method.

    This item pipeline makes a request to a locally-running instance of Splash to render a screenshot of the item URL. After the request response is downloaded, the item pipeline saves the screenshot to a file and adds the filename to the item.

    import hashlib
    from urllib.parse import quote

    import scrapy
    from itemadapter import ItemAdapter
    from scrapy.utils.defer import maybe_deferred_to_future


    class ScreenshotPipeline:
        """Pipeline that uses Splash to render screenshot of
        every Scrapy item."""

        SPLASH_URL = "http://localhost:8050/render.png?url={}"

        async def process_item(self, item, spider):
            adapter = ItemAdapter(item)
            encoded_item_url = quote(adapter["url"])
            screenshot_url = self.SPLASH_URL.format(encoded_item_url)
            request = scrapy.Request(screenshot_url)
            # Download the screenshot request through the crawler engine.
            response = await maybe_deferred_to_future(
                spider.crawler.engine.download(request)
            )

            if response.status != 200:
                # Error happened, return item.
                return item

            # Save screenshot to file, filename will be hash of url.
            url = adapter["url"]
            url_hash = hashlib.md5(url.encode("utf8")).hexdigest()
            filename = f"{url_hash}.png"
            with open(filename, "wb") as f:
                f.write(response.body)

            # Store filename in item.
            adapter["screenshot_filename"] = filename
            return item

    A filter that looks for duplicate items, and drops those items that were already processed. Let’s say that our items have a unique id, but our spider returns multiple items with the same id:
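    A minimal sketch of such a filter, assuming each item exposes an id field:

    from itemadapter import ItemAdapter
    from scrapy.exceptions import DropItem


    class DuplicatesPipeline:
        def __init__(self):
            self.ids_seen = set()

        def process_item(self, item, spider):
            adapter = ItemAdapter(item)
            if adapter["id"] in self.ids_seen:
                # Drop any item whose id has already been processed.
                raise DropItem(f"Duplicate item found: {item!r}")
            else:
                self.ids_seen.add(adapter["id"])
                return item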

    To activate an Item Pipeline component you must add its class to the ITEM_PIPELINES setting, like in the following example:

    ITEM_PIPELINES = {
        'myproject.pipelines.PricePipeline': 300,
        'myproject.pipelines.JsonWriterPipeline': 800,
    }

    The integer values you assign to classes in this setting determine the order in which they run: items go through pipelines from lower valued to higher valued classes. It’s customary to define these numbers in the 0-1000 range.