
Scraping mzitu.com: the key is adding 'Referer': 'http://www.mzitu.com/' to the request headers


Scraping mzitu.com; the key point is adding

'Referer': 'http://www.mzitu.com/'
import requests
import re
import time

# The User-Agent makes the request look like a browser; the Referer is the
# key header this post is about.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.33 Safari/537.36',
    'Referer': 'http://www.mzitu.com/',
}

session = requests.session()
session.keep_alive = False

for a in range(1, 10):
    url = 'https://www.mzitu.com/149482/' + str(a)
    data = session.get(url, headers=headers).text
    # Regular expression that pulls the image URL out of the page source
    photo = r'<p>.*?<.*?src=.*?"(.*?)".*?alt=.*?width=.*?>'
    photo_url = re.findall(photo, data, re.S)  # pattern, source, flags
    print(photo_url)
    time.sleep(2)
    for i, b in enumerate(photo_url):
        # Per-image Referer: the gallery page the image belongs to
        header = dict(headers, Referer=url)
        response = session.get(b, headers=header)
        print(response)
        print("Downloading image %s-%s" % (a, i))
        # Include both the page number and the image index in the filename
        # so images from the same page do not overwrite each other
        with open('{}-{}.jpg'.format(a, i), 'wb') as f:
            f.write(response.content)
        time.sleep(2)
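The script above creates a session but still passes headers explicitly on every call. A minimal alternative sketch (not from the original post): set the headers once on the session, so every request carries the Referer automatically.

import requests

session = requests.Session()
# Set default headers once; session.get() sends them on every request.
session.headers.update({
    'User-Agent': 'Mozilla/5.0',          # any reasonable browser UA string
    'Referer': 'http://www.mzitu.com/',   # the header this post is about
})

resp = session.get('https://www.mzitu.com/149482/1')  # Referer sent automatically
print(resp.status_code)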

Why add Referer: when making a request, it tells the site which page the request came from. Image hosts commonly check this header as an anti-hotlinking measure and refuse requests that do not appear to come from their own pages.
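A minimal sketch of the effect (the image URL below is a made-up placeholder, and the exact rejection behavior is an assumption; anti-hotlinking servers typically answer 403 or serve a placeholder image instead):

import requests

img_url = 'https://i.meizitu.net/2018/07/example.jpg'  # hypothetical image URL

# Without Referer: the image host is expected to reject the request.
r1 = requests.get(img_url)
print('no Referer:  ', r1.status_code)

# With a Referer pointing at the site: the real image should be served.
r2 = requests.get(img_url, headers={'Referer': 'http://www.mzitu.com/'})
print('with Referer:', r2.status_code)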



Original: https://www.cnblogs.com/ilovelh/p/10382657.html
