
[Crawler] Getting a Random User-Agent

Posted: 2019-02-27 17:14:55

Module used: fake-useragent

https://github.com/hellysmile/fake-useragent

 

1. Install the module
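The package is published on PyPI, so a plain pip install is enough (assuming pip points at the same Python environment Scrapy runs in):

pip install fake-useragent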

2. Configuration

# settings.py

# Downloader middlewares: enable the custom middleware and disable Scrapy's built-in one
DOWNLOADER_MIDDLEWARES = {
    'Lagou.middlewares.RandomUserAgentMiddleware': 543,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}

# Type of UA to generate (e.g. "random", "chrome", "firefox")
RANDOM_UA_TYPE = "random"
# middlewares.py
from fake_useragent import UserAgent


class RandomUserAgentMiddleware(object):
    """Modeled on Scrapy's built-in UserAgentMiddleware.

    This middleware allows spiders to override the user_agent with a random one.
    """

    def __init__(self, crawler):
        super().__init__()
        # Instantiate UserAgent() and read the UA type from the settings
        self.ua = UserAgent()
        self.ua_type = crawler.settings.get("RANDOM_UA_TYPE", "random")

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    # def spider_opened(self, spider):
    #     self.user_agent = getattr(spider, 'user_agent', self.user_agent)

    def process_request(self, request, spider):
        def get_ua():
            # Fetch a random UA of the configured type via reflection (getattr)
            random_ua = getattr(self.ua, self.ua_type)
            return random_ua

        request.headers.setdefault("User-Agent", get_ua())
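Outside of Scrapy, the library can be exercised directly to see what a given UA type returns; a minimal sketch (the random and chrome attributes come from fake-useragent itself, the print calls are only for illustration):

from fake_useragent import UserAgent

ua = UserAgent()
print(ua.random)   # a random UA string from any browser family
print(ua.chrome)   # a Chrome UA string, i.e. what RANDOM_UA_TYPE = "chrome" would select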

 


Original article: https://www.cnblogs.com/st-st/p/10444764.html
