
Crawler 2: Scraping 30 Baidu images with urllib3

Posted: 2019-01-12 20:14:30
import urllib3
import re

# Download the images returned by Baidu's image search page
# 1. Locate the target data
# page_url = 'http://image.baidu.com/search/index?tn=baiduimage&ct=201326592&lm=-1&cl=2&ie=gb18030&word=%CD%BC%C6%AC&fr=ala&ala=1&alatpl=others&pos=0'
# http = urllib3.PoolManager()
# res = http.request('GET', page_url)
# print(res.data.decode('utf-8'))

# The Ajax endpoint (pn=30&rn=30 requests 30 results)
ajax_url = 'http://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&ct=201326592&is=&fp=result&queryWord=%E5%9B%BE%E7%89%87&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=&z=&ic=&hd=&latest=&copyright=&word=%E5%9B%BE%E7%89%87&s=&se=&tab=&width=&height=&face=&istype=&qc=&nc=&fr=&expermode=&force=&pn=30&rn=30&gsm=1e&1546957772498='
http = urllib3.PoolManager()
res = http.request('GET', ajax_url)
# print(res.data.decode())
img_urls = re.findall(r'"thumbURL":"(.*?)"', res.data.decode())
# print(img_urls)
# print(len(img_urls))
headers = {
    'Referer': 'https://www.baidu.com/s?ie=utf-8&wd=%E5%9B%BE%E7%89%87'
}
for i, img_url in enumerate(img_urls):
    # print(img_url)
    img = http.request('GET', img_url, headers=headers)
    # Save each image to the current directory
    with open('%d.jpg' % i, 'wb') as f:
        f.write(img.data)
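The regex above scrapes `thumbURL` values out of the raw response text. Since the Ajax endpoint returns JSON, a sketch that parses it directly is less fragile; the sample payload below is a hypothetical stand-in for the real response, which holds one dict per image under the `data` key (the real body may contain invalid escape sequences that `json.loads` rejects, so the regex fallback still has its place):

```python
import json

# Hypothetical sample mimicking the shape of Baidu's Ajax response:
# a "data" list whose entries each carry a "thumbURL" (the trailing
# empty dict mirrors the padding entry the real endpoint appends).
sample = ('{"data": [{"thumbURL": "http://example.com/a.jpg"},'
          ' {"thumbURL": "http://example.com/b.jpg"}, {}]}')

def extract_thumb_urls(raw):
    """Parse the JSON body and collect every non-empty thumbURL."""
    payload = json.loads(raw)
    return [item["thumbURL"]
            for item in payload.get("data", [])
            if item.get("thumbURL")]

print(extract_thumb_urls(sample))
# → ['http://example.com/a.jpg', 'http://example.com/b.jpg']
```

The `if item.get("thumbURL")` guard matters because the endpoint pads the list with an empty trailing entry that has no URL.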

 


Original: https://www.cnblogs.com/cxhzy/p/10260839.html
