Methods for crawling an entire site
Using CrawlSpider:
Create a project: scrapy startproject xxx
cd xxx
Create the spider file (CrawlSpider): scrapy genspider -t crawl sun www.xxx.com
Link extractor (LinkExtractor): extracts links from a page according to a specified rule
Rule parser (Rule): sends the response for each extracted link to a callback for parsing
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SunSpider(CrawlSpider):
    name = 'sun'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://wz.sun0769.com/political/index/politicsNewest?id=1&page=1']

    # Link extractor: extracts links according to the specified rule (allow="regex")
    link = LinkExtractor(allow=r'id=1&page=\d+')

    rules = (
        # Rule: rule parser, sends each extracted link's response to the callback
        Rule(link, callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        print(response)
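To see which URLs the `allow` regex above would keep, here is a minimal stdlib-only sketch of the filtering the LinkExtractor performs; the candidate URLs are made-up examples following the site's pagination pattern, not data from the original post:

```python
import re

# Same pattern as the spider's LinkExtractor(allow=r'id=1&page=\d+')
pattern = re.compile(r"id=1&page=\d+")

# Hypothetical hrefs as they might appear on the listing page
candidate_links = [
    "http://wz.sun0769.com/political/index/politicsNewest?id=1&page=2",
    "http://wz.sun0769.com/political/index/politicsNewest?id=1&page=3",
    "http://wz.sun0769.com/political/politics/index?id=449762",  # a detail page: no page=N, so it is filtered out
]

# The extractor keeps only links whose URL matches the allow regex
matched = [url for url in candidate_links if pattern.search(url)]
for url in matched:
    print(url)
```

Only the two pagination URLs survive the filter; each surviving link becomes a new request whose response is handed to the Rule's callback (`parse_item` above).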
Original article: https://www.cnblogs.com/nanjo4373977/p/13024650.html