Requests and Responses

    Typically, Request objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a Response object which travels back to the spider that issued the request.

    Both Request and Response classes have subclasses which add functionality not required in the base classes. These are described below in Request subclasses and Response subclasses.

    class scrapy.http.Request(url[, callback, method='GET', headers, body, cookies, meta, encoding='utf-8', priority=0, dont_filter=False, errback])

    A Request object represents an HTTP request, which is usually generated in the Spider and executed by the Downloader, and thus generating a Response.


    url

    A string containing the URL of this request. Keep in mind that this attribute contains the escaped URL, so it can differ from the URL passed in the constructor.

    This attribute is read-only. To change the URL of a Request use replace().
    method

    A string representing the HTTP method in the request. This is guaranteed to be uppercase. Example: "GET", "POST", "PUT", etc.
    headers

    A dictionary-like object which contains the request headers.
    body

    A str that contains the request body.

    This attribute is read-only. To change the body of a Request use replace().
    meta

    A dict that contains arbitrary metadata for this request. This dict is empty for new Requests, and is usually populated by different Scrapy components (extensions, middlewares, etc). So the data contained in this dict depends on the extensions you have enabled.

    See Request.meta special keys for a list of special meta keys recognized by Scrapy.

    This dict is shallow copied when the request is cloned using the copy() or replace() methods, and can also be accessed, in your spider, from the response.meta attribute.
    copy()

    Return a new Request which is a copy of this Request. See also: Passing additional data to callback functions.
    replace([url, method, headers, body, cookies, meta, encoding, dont_filter, callback, errback])

    Return a Request object with the same members, except for those members given new values by whichever keyword arguments are specified. The Request.meta attribute is copied by default (unless a new value is given in the meta argument). See also Passing additional data to callback functions.
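
    For instance, a minimal sketch (illustrative URL; request is an existing Request) of deriving a modified copy:

    # all other members (headers, meta, callback, ...) are carried over
    new_request = request.replace(url="http://www.example.com/other_page.html",
                                  dont_filter=True)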

    Passing additional data to callback functions

    The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the downloaded Response object as its first argument.

    Example:

    def parse_page1(self, response):
        return scrapy.Request("http://www.example.com/some_page.html",
                              callback=self.parse_page2)

    def parse_page2(self, response):
        # this would log http://www.example.com/some_page.html
        self.logger.info("Visited %s", response.url)

    Here’s an example of how to pass an item using this mechanism, to populate different fields from different pages:
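
    The following is a minimal sketch along those lines (MyItem and its main_url/other_url fields are illustrative placeholders):

    def parse_page1(self, response):
        item = MyItem()                    # MyItem is an illustrative Item subclass
        item['main_url'] = response.url
        request = scrapy.Request("http://www.example.com/some_page.html",
                                 callback=self.parse_page2)
        request.meta['item'] = item        # stash the item for the next callback
        return request

    def parse_page2(self, response):
        item = response.meta['item']       # retrieve it from the response
        item['other_url'] = response.url
        return item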

    Request.meta special keys

    The Request.meta attribute can contain any arbitrary data, but there are some special keys recognized by Scrapy and its built-in extensions.

    Those are:

    bindaddress

    The outgoing IP address to use for performing the request.
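
    As a hedged sketch, a special key is set like any other meta entry; the ('IP', port) tuple value format shown for bindaddress is an assumption, not specified above, so verify it against your Scrapy version:

    # pin this single request to a specific local interface
    yield scrapy.Request("http://www.example.com",
                         meta={'bindaddress': ('192.0.2.10', 0)},  # assumed format
                         callback=self.parse)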

    Request subclasses

    Here is the list of built-in Request subclasses. You can also subclass the Request class to implement your own custom functionality.

    FormRequest objects

    The FormRequest class extends the base Request with functionality for dealing with HTML forms. It uses lxml.html forms to pre-populate form fields with form data from Response objects.

    class scrapy.http.FormRequest(url[, formdata, ...])

    The FormRequest class adds a new formdata argument to the constructor. The remaining arguments are the same as for the Request class and are not documented here.



    The FormRequest objects support the following class method in addition to the standard Request methods:
    classmethod from_response(response[, formname=None, formnumber=0, formdata=None, formxpath=None, clickdata=None, dont_click=False, ...])

    Returns a new FormRequest object with its form field values pre-populated with those found in the HTML <form> element contained in the given response. For an example see Using FormRequest.from_response() to simulate a user login.

    The policy is to automatically simulate a click, by default, on any form control that looks clickable, like a <input type="submit">. Even though this is quite convenient, and often the desired behaviour, sometimes it can cause problems which could be hard to debug. For example, when working with forms that are filled and/or submitted using javascript, the default from_response() behaviour may not be the most appropriate. To disable this behaviour you can set the dont_click argument to True. Also, if you want to change the control clicked (instead of disabling it) you can also use the clickdata argument.
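
    A hedged sketch (the formdata values are illustrative) of submitting a form without simulating a click:

    request = FormRequest.from_response(response,
                                        formdata={'field': 'value'},
                                        dont_click=True)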



    The other parameters of this class method are passed directly to the FormRequest constructor.


    New in version 0.10.3: The formname parameter.



    New in version 0.17: The formxpath parameter.

    Using FormRequest to send data via HTTP POST

    If you want to simulate a HTML Form POST in your spider and send a couple of key-value fields, you can return a FormRequest object (from your spider) like this:

    return [FormRequest(url="http://www.example.com/post/action",
                        formdata={'name': 'John Doe', 'age': '27'},
                        callback=self.after_post)]

    Using FormRequest.from_response() to simulate a user login

    It is usual for web sites to pre-populate certain form fields (such as session-related data or authentication tokens on login pages) through <input type="hidden"> elements. When scraping with Scrapy, if you want those fields pre-populated or want to override fields like the username and password, you can use the FormRequest.from_response() method. Here is an example spider which uses it:

    import scrapy

    class LoginSpider(scrapy.Spider):
        name = 'example.com'
        start_urls = ['http://www.example.com/users/login.php']

        def parse(self, response):
            return scrapy.FormRequest.from_response(
                response,
                formdata={'username': 'john', 'password': 'secret'},
                callback=self.after_login
            )

        def after_login(self, response):
            # check login succeed before going on
            if "authentication failed" in response.body:
                self.logger.error("Login failed")
                return

            # continue scraping with authenticated session...

    Response objects

    class scrapy.http.Response(url[, status=200, headers, body, flags])

    A Response object represents an HTTP response, which is usually downloaded (by the Downloader) and fed to the Spiders for processing.


    url

    A string containing the URL of the response.

    This attribute is read-only. To change the URL of a Response use replace().
    status

    An integer representing the HTTP status of the response. Example: 200, 404.
    headers

    A dictionary-like object which contains the response headers.

    body

    A str containing the body of this Response. Keep in mind that Response.body is always a str. If you want the unicode version use TextResponse.body_as_unicode() (only available in TextResponse and subclasses).

    This attribute is read-only. To change the body of a Response use replace().
    request

    The Request object that generated this response. This attribute is assigned in the Scrapy engine, after the response and the request have passed through all Downloader Middlewares. In particular, this means that:

    - HTTP redirections will cause the original request (to the URL before redirection) to be assigned to the redirected response (with the final URL after redirection).
    - Response.request.url doesn’t always equal Response.url.
    - This attribute is only available in the spider code, and in the Spider Middlewares, but not in Downloader Middlewares (although you have the Request available there by other means) and handlers of the response_downloaded signal.
    meta

    A shortcut to the Request.meta attribute of the Response.request object (ie. self.request.meta).

    Unlike the Response.request attribute, the Response.meta attribute is propagated along redirects and retries, so you will get the original Request.meta sent from your spider.
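
    A minimal sketch (the 'session_id' key is hypothetical) of what this guarantees inside a callback:

    def parse_result(self, response):
        # even if the request was redirected or retried along the way,
        # the meta dict set on the original request is still available here
        session_id = response.meta.get('session_id')
        self.logger.info("Session: %s", session_id)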


    See also

    Request.meta attribute

    flags

    A list that contains flags for this response. Flags are labels used for tagging Responses. For example: 'cached', 'redirected', etc. And they’re shown on the string representation of the Response (__str__ method) which is used by the engine for logging.
    copy()

    Returns a new Response which is a copy of this Response.
    replace([url, status, headers, body, request, flags, cls])

    Returns a Response object with the same members, except for those members given new values by whichever keyword arguments are specified. The Response.meta attribute is copied by default.
    urljoin(url)

    Constructs an absolute url by combining the Response’s url with a possible relative url.

    This is a wrapper over urlparse.urljoin, it’s merely an alias for making this call:




    urlparse.urljoin(response.url, url)
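
    For instance, a minimal usage sketch (the XPath query is illustrative):

    # resolve a relative href found in the page against response.url
    href = response.xpath('//a/@href').extract()[0]
    absolute_url = response.urljoin(href)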



    TextResponse objects

    class scrapy.http.TextResponse(url[, encoding[, ...]])

    TextResponse objects add encoding capabilities to the base Response class, which is meant to be used only for binary data, such as images, sounds or any media file.

    TextResponse objects support a new encoding constructor argument, in addition to the base Response objects. The remaining functionality is the same as for the Response class and is not documented here.



    TextResponse objects support the following attributes in additionto the standard ones:
    encoding

    A string with the encoding of this response. The encoding is resolved by trying the following mechanisms, in order:

    - the encoding passed in the constructor encoding argument
    - the encoding declared in the Content-Type HTTP header. If this encoding is not valid (ie. unknown), it is ignored and the next resolution mechanism is tried.
    - the encoding declared in the response body. The TextResponse class doesn’t provide any special functionality for this. However, the HtmlResponse and XmlResponse classes do.
    - the encoding inferred by looking at the response body. This is the more fragile method but also the last one tried.
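
    As a minimal sketch (illustrative URL and body), the first mechanism above can be exercised by passing encoding to the constructor explicitly:

    from scrapy.http import TextResponse

    # the constructor argument takes precedence over headers and body declarations
    response = TextResponse(url="http://www.example.com",
                            body="caf\xe9", encoding="latin-1")
    response.body_as_unicode()  # -> u'caf\xe9'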
    selector

    A Selector instance using the response as target. The selector is lazily instantiated on first access.

    TextResponse objects support the following methods in addition to the standard Response ones:
    body_as_unicode()

    Returns the body of the response as unicode. This is equivalent to:

    response.body.decode(response.encoding)
    But not equivalent to:




    unicode(response.body)




    Since, in the latter case, you would be using your system default encoding (typically ascii) to convert the body to unicode, instead of the response encoding.
    xpath(query)

    A shortcut to TextResponse.selector.xpath(query):




    response.xpath('//p')



    css(query)

    A shortcut to TextResponse.selector.css(query):

    response.css('p')

    HtmlResponse objects

    class scrapy.http.HtmlResponse(url[, ...])

    The HtmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the HTML meta http-equiv attribute. See TextResponse.encoding.
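
    A minimal sketch (illustrative URL and body) of that auto-discovery:

    from scrapy.http import HtmlResponse

    body = '<html><head><meta http-equiv="Content-Type" content="text/html; charset=cp1251"></head></html>'
    response = HtmlResponse(url="http://www.example.com", body=body)
    response.encoding  # -> 'cp1251', discovered from the meta tag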

    XmlResponse objects

    class scrapy.http.XmlResponse(url[, ...])

    The XmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the XML declaration line. See TextResponse.encoding.