Selecting dynamically-loaded content

    When a webpage shows the desired data in a web browser but the data cannot be reached with selectors once the page is downloaded with Scrapy, the recommended approach is to find the data source and extract the data from it.

    If you fail to do that, and you can nonetheless access the desired data through the DOM from your web browser, see Pre-rendering JavaScript.

    To extract the desired data, you must first find its source location.

    If the data is in a non-text-based format, such as an image or a PDF document, use the network tool of your web browser to find the corresponding request, and reproduce it.

    If your web browser lets you select the desired data as text, the data may be defined in embedded JavaScript code, or loaded from an external resource in a text-based format.

    In that case, you can use a tool like wgrep to find the URL of that resource.

    If the data turns out to come from the original URL itself, you must inspect the source code of the webpage to determine where the data is located.

    If the data comes from a different URL, you will need to reproduce the corresponding request.

    Inspecting the source code of a webpage

    Sometimes you need to inspect the source code of a webpage (not the DOM) to determine where some desired data is located.

    Use Scrapy’s fetch command to download the webpage contents as seen by Scrapy:
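    For example (the URL is a placeholder; --nolog keeps log output out of the saved file):

        scrapy fetch --nolog https://example.com > response.html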

    If the desired data is in embedded JavaScript code within a <script/> element, see Parsing JavaScript code.

    If you cannot find the desired data, first make sure it’s not just Scrapy: download the webpage with an HTTP client like curl or wget and see if the information can be found in the response they get.

    If they get a response with the desired data, modify your Scrapy Request to match that of the other HTTP client. For example, try using the same user-agent string (USER_AGENT) or the same headers.
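    For example, a minimal sketch (the user-agent value is a placeholder; copy the exact string that the successful HTTP client sent):

        # settings.py: identify Scrapy the same way as the HTTP client that got the data
        USER_AGENT = "curl/8.5.0"  # placeholder value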

    If they also get a response without the desired data, you’ll need to take steps to make your request more similar to that of the web browser. See Reproducing requests below.

    Reproducing requests

    Sometimes we need to reproduce a request the way our web browser performs it.

    Use the network tool of your web browser to see how your web browser performs the desired request, and try to reproduce that request with Scrapy.

    As all major browsers allow exporting requests in cURL format, Scrapy incorporates the method from_curl() to generate an equivalent Request from a cURL command. For more information, see the request-from-curl part of the network tool section.
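    A minimal sketch of from_curl() (the cURL command below is a trimmed placeholder; in practice you paste the full command exported by the browser’s network tool):

        from scrapy import Request

        # Build a Request equivalent to a cURL command copied from the browser
        request = Request.from_curl(
            "curl 'https://example.org/api/data' -H 'Accept: application/json'"
        )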

    Once you get the expected response, you can extract the desired data from it.

    You can reproduce any request with Scrapy. However, sometimes reproducing all necessary requests may not seem efficient in developer time. If that is your case, and crawling speed is not a major concern for you, you can alternatively consider JavaScript pre-rendering.

    If you get the expected response sometimes, but not always, the issue is probably not your request, but the target server. The target server might be buggy, overloaded, or banning some of your requests.

    Note that to translate a cURL command into a Scrapy request, you may use curl2scrapy.

    Handling different response formats

    Once you have a response with the desired data, how you extract the desired data from it depends on the type of response:

    • If the response is HTML or XML, use selectors as usual.

    • If the response is JSON, use json.loads() to load the desired data from response.text (see the sketch after this list).

        If the desired data is inside HTML or XML code embedded within JSON data, you can load that HTML or XML code into a Selector and then use it as usual (also shown in the sketch after this list).

    • If the response is JavaScript, or HTML with a <script/> element containing the desired data, see Parsing JavaScript code.

    • If the response is CSS, use a regular expression to extract the desired data from response.text.

    • If the response is an image or another format based on images (e.g. PDF), read the response as bytes from response.body and use an OCR solution to extract the desired data as text.

        For example, you can use pytesseract. To read a table from a PDF, tabula-py may be a better choice.

    • If the response is SVG, or HTML with embedded SVG containing the desired data, you may be able to extract the desired data using selectors, since SVG is based on XML.

        Otherwise, you might need to convert the SVG code into a raster image, and handle that raster image.
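    For the JSON case above, a minimal sketch (the 'html' key is hypothetical; use whatever key of the JSON payload holds the embedded markup):

        >>> import json
        >>> from parsel import Selector
        >>> data = json.loads(response.text)
        >>> # If part of the data is embedded HTML or XML, wrap it in a Selector and query it as usual
        >>> selector = Selector(text=data['html'])
        >>> selector.css('h1::text').get()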

    Parsing JavaScript code

      If the desired data is hardcoded in JavaScript, you first need to get the JavaScript code:

      • If the JavaScript code is within a <script/> element of an HTML page, use selectors to extract the text within that <script/> element.

      Once you have a string with the JavaScript code, you can extract the desired data from it:

      • You might be able to use a regular expression to extract the desired data in JSON format, which you can then parse with json.loads().

        For example, if the JavaScript code contains a separate line like var data = {"field": "value"}; you can extract that data as follows:

        >>> import json
        >>> pattern = r'\bvar\s+data\s*=\s*(\{.*?\})\s*;\s*\n'
        >>> json_data = response.css('script::text').re_first(pattern)
        >>> json.loads(json_data)
        {'field': 'value'}
      • chompjs provides an API to parse JavaScript objects into a dict.

        For example, if the JavaScript code contains an object literal such as var data = {field: "value"}; you can extract that data as follows:
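        A sketch, using chompjs.parse_js_object(), which parses the first JavaScript object found in the string into a Python dict:

        >>> import chompjs
        >>> javascript = response.css('script::text').get()
        >>> data = chompjs.parse_js_object(javascript)
        >>> data
        {'field': 'value'}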

      • Otherwise, use js2xml to convert the JavaScript code into an XML document that you can parse using selectors.

        For example, if the JavaScript code contains var data = {field: "value"}; you can extract that data as follows:

        >>> import js2xml
        >>> import lxml.etree
        >>> from parsel import Selector
        >>> javascript = response.css('script::text').get()
        >>> xml = lxml.etree.tostring(js2xml.parse(javascript), encoding='unicode')
        >>> selector = Selector(text=xml)
        >>> selector.css('var[name="data"]').get()
        '<var name="data"><object><property name="field"><string>value</string></property></object></var>'

      Pre-rendering JavaScript

      On webpages that fetch data from additional requests, reproducing those requests that contain the desired data is the preferred approach. The effort is often worth the result: structured, complete data with minimum parsing time and network transfer.

      However, sometimes it can be really hard to reproduce certain requests. Or you may need something that no request can give you, such as a screenshot of a webpage as seen in a web browser.

      In these cases, use the Splash JavaScript-rendering service, along with scrapy-splash for seamless integration.

      Splash returns as HTML the DOM of a webpage, so that you can parse it with selectors. It provides great flexibility through configuration or scripting.
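      For instance, a minimal sketch of a spider that requests a page through Splash via scrapy-splash (it assumes scrapy-splash is configured in your settings as described in its README; the URL and wait time are placeholders):

        import scrapy
        from scrapy_splash import SplashRequest


        class SplashExampleSpider(scrapy.Spider):
            name = "splash_example"

            def start_requests(self):
                # Let Splash render the page (waiting briefly for JavaScript) before parsing it
                yield SplashRequest("https://example.org", self.parse, args={"wait": 0.5})

            def parse(self, response):
                # response.text is the rendered DOM as HTML, so selectors work as usual
                yield {"title": response.css("title::text").get()}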

      If you need something beyond what Splash offers, such as interacting with the DOM on-the-fly from Python code instead of using a previously-written script, or handling multiple web browser windows, you might need to use a headless browser instead.

      A headless browser is a special web browser that provides an API for automation. By installing the asyncio reactor, it is possible to integrate asyncio-based libraries which handle headless browsers.
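      Enabling the asyncio reactor is done through the TWISTED_REACTOR setting, for example in settings.py:

        # settings.py
        TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"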

      One such library is playwright-python (an official Python port of playwright). The following is a simple snippet to illustrate its usage within a Scrapy spider:
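      A minimal sketch of such a spider (it assumes the asyncio reactor above is enabled; the URL and the extracted field are placeholders):

        import scrapy
        from playwright.async_api import async_playwright


        class PlaywrightSpider(scrapy.Spider):
            name = "playwright"
            start_urls = ["data:,"]  # minimal placeholder request; the real page is fetched by Playwright

            async def parse(self, response):
                async with async_playwright() as pw:
                    browser = await pw.chromium.launch()
                    page = await browser.new_page()
                    await page.goto("https://example.org")
                    title = await page.title()
                    await browser.close()
                    return {"title": title}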

      However, using playwright-python directly as in the above example circumvents most of the Scrapy components (middlewares, dupefilter, etc.). We recommend using scrapy-playwright for a better integration.