
Scrapy framework: log levels and passing data between requests


Scrapy log levels

  

- When you run a spider with scrapy crawl spiderFileName, what gets printed in the terminal is Scrapy's log output.

  - Log message levels:

        ERROR : errors

        WARNING : warnings

        INFO : general information

        DEBUG : debugging detail (Scrapy's default level)

       

  - Choosing what gets logged:

    In the settings.py configuration file, add

        LOG_LEVEL = '<level>' to emit only messages at that level and above.

        LOG_FILE = 'log.txt' to write the log to that file instead of the terminal.
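For example, a minimal settings.py fragment (the level and file name here are placeholders to adapt):

# settings.py
LOG_LEVEL = 'ERROR'   # print only ERROR-level messages and above
LOG_FILE = 'log.txt'  # send the log to log.txt instead of the console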

Passing data between requests

  - Sometimes the data we want is spread across pages. For example, on a movie site the title and rating sit on the listing (first-level) page, while the remaining details live on each movie's (second-level) detail page. To stitch one item together from both pages, we pass data along with the request.

  - Case study: crawl the movie site www.id97.com, taking the title, genre, and rating from the first-level page, and the release date, director, and runtime from the second-level page.

Spider file:

import scrapy
from moviePro.items import MovieproItem

class MovieSpider(scrapy.Spider):
    name = 'movie'
    allowed_domains = ['www.id97.com']
    start_urls = ['http://www.id97.com/']

    def parse(self, response):
        div_list = response.xpath('//div[@class="col-xs-1-5 movie-item"]')

        for div in div_list:
            item = MovieproItem()
            item['name'] = div.xpath('.//h1/a/text()').extract_first()
            item['score'] = div.xpath('.//h1/em/text()').extract_first()
            # xpath('string(.)') extracts the text of the current node and all
            # of its descendants; (.) refers to the current node
            item['kind'] = div.xpath('.//div[@class="otherinfo"]').xpath('string(.)').extract_first()
            item['detail_url'] = div.xpath('./div/a/@href').extract_first()
            # request the second-level detail page; the meta dict carries the
            # half-filled item over to the next callback
            yield scrapy.Request(url=item['detail_url'], callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        # recover the item from the response
        item = response.meta['item']
        item['actor'] = response.xpath('//div[@class="row"]//table/tr[1]/a/text()').extract_first()
        item['time'] = response.xpath('//div[@class="row"]//table/tr[7]/td[2]/text()').extract_first()
        item['long'] = response.xpath('//div[@class="row"]//table/tr[8]/td[2]/text()').extract_first()
        # hand the finished item to the pipeline
        yield item
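A side note on the meta pattern above: response.meta works in every Scrapy version, but Scrapy 1.7 added cb_kwargs as the documented way to pass values into a callback. The equivalent hand-off would look roughly like this (a fragment of the same spider, sketched):

        yield scrapy.Request(url=item['detail_url'],
                             callback=self.parse_detail,
                             cb_kwargs={'item': item})

    def parse_detail(self, response, item):
        # the item now arrives as a regular keyword argument
        ...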

Items file:

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class MovieproItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    score = scrapy.Field()
    time = scrapy.Field()
    long = scrapy.Field()
    actor = scrapy.Field()
    kind = scrapy.Field()
    detail_url = scrapy.Field()

Pipeline file:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json

class MovieproPipeline(object):
    def __init__(self):
        self.fp = open('data.txt', 'w')

    def process_item(self, item, spider):
        dic = dict(item)
        print(dic)
        # one JSON object per line keeps the output file parseable
        self.fp.write(json.dumps(dic, ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()
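As the header comment reminds us, a pipeline only runs once it is registered in settings.py. Assuming the default project layout (the class lives in moviePro/pipelines.py), the entry would be:

ITEM_PIPELINES = {
    'moviePro.pipelines.MovieproPipeline': 300,
}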

How to improve Scrapy's crawling efficiency

Increase concurrency:
    By default Scrapy performs 16 concurrent requests (handled asynchronously, not with threads); this can be raised. In the settings file, set CONCURRENT_REQUESTS = 100 to allow 100 requests in flight at once.

Lower the log level:
    Running Scrapy produces a large amount of log output. To cut the CPU spent on logging, raise the threshold to INFO or ERROR. In the settings file: LOG_LEVEL = 'INFO'

Disable cookies:
    If you do not actually need cookies, disable cookie handling during the crawl to save CPU and speed things up. In the settings file: COOKIES_ENABLED = False

Disable retries:
    Re-requesting failed pages (retrying) slows the crawl down, so retries can be turned off. In the settings file: RETRY_ENABLED = False

Reduce the download timeout:
    When crawling very slow links, a shorter download timeout lets stuck requests be abandoned quickly, improving throughput. In the settings file: DOWNLOAD_TIMEOUT = 10 sets the timeout to 10 s.
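Pulled together, the five tweaks above amount to a settings.py sketch like this (values are the ones suggested in the list):

# settings.py
CONCURRENT_REQUESTS = 100   # raise concurrency (Scrapy's default is 16)
LOG_LEVEL = 'INFO'          # less log output, less CPU spent on logging
COOKIES_ENABLED = False     # skip cookie handling entirely
RETRY_ENABLED = False       # do not re-request failed pages
DOWNLOAD_TIMEOUT = 10       # abandon a download after 10 seconds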

Test case: crawl student photos from the xiaohua gallery site www.521609.com

import scrapy
from xiaohua.items import XiaohuaItem

class XiahuaSpider(scrapy.Spider):

    name = 'xiaohua'
    allowed_domains = ['www.521609.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    pageNum = 1
    url = 'http://www.521609.com/daxuemeinv/list8%d.html'

    def parse(self, response):
        li_list = response.xpath('//div[@class="index_img list_center"]/ul/li')
        for li in li_list:
            school = li.xpath('./a/img/@alt').extract_first()
            img_url = li.xpath('./a/img/@src').extract_first()

            item = XiaohuaItem()
            item['school'] = school
            item['img_url'] = 'http://www.521609.com' + img_url

            yield item

        # follow the list pages up to page 10
        if self.pageNum < 10:
            self.pageNum += 1
            url = self.url % self.pageNum
            yield scrapy.Request(url=url, callback=self.parse)

 

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class XiaohuaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    school=scrapy.Field()
    img_url=scrapy.Field()

 

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json
import os
import urllib.request

class XiaohuaPipeline(object):
    def __init__(self):
        self.fp = None

    def open_spider(self, spider):
        print('spider started')
        self.fp = open('./xiaohua.txt', 'w')

    def download_img(self, item):
        url = item['img_url']
        fileName = item['school'] + '.jpg'
        if not os.path.exists('./xiaohualib'):
            os.mkdir('./xiaohualib')
        filepath = os.path.join('./xiaohualib', fileName)
        # note: urlretrieve is a blocking call; see the sketch below
        urllib.request.urlretrieve(url, filepath)
        print(fileName + ' downloaded')

    def process_item(self, item, spider):
        obj = dict(item)
        json_str = json.dumps(obj, ensure_ascii=False)
        self.fp.write(json_str + '\n')

        # download the image
        self.download_img(item)
        return item

    def close_spider(self, spider):
        print('spider finished')
        self.fp.close()
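One caveat on the pipeline above: urllib.request.urlretrieve blocks, so every download stalls Scrapy's event loop and works against the concurrency tuning this section is about. Scrapy ships an ImagesPipeline that fetches images through its own asynchronous downloader instead. A minimal sketch of wiring it to this item, assuming Pillow is installed (the subclass name is mine):

from scrapy import Request
from scrapy.pipelines.images import ImagesPipeline

class XiaohuaImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # hand the URL to Scrapy's downloader instead of urllib
        yield Request(item['img_url'])

# settings.py
# ITEM_PIPELINES = {'xiaohua.pipelines.XiaohuaImagesPipeline': 1}
# IMAGES_STORE = './xiaohualib'   # directory where images are saved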

Settings file:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 100
COOKIES_ENABLED = False
LOG_LEVEL = 'ERROR'
RETRY_ENABLED = False
DOWNLOAD_TIMEOUT = 3
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
#DOWNLOAD_DELAY = 3   # keep disabled: a per-request delay would undo the concurrency gains above

 


Original post: https://www.cnblogs.com/wanglan/p/10826999.html
