
Scraping 千库网 (i588ku.com) with Python

Posted: 2020-09-14 13:59:45

Target URL: https://i588ku.com/beijing/0-0-default-0-8-0-0-0-0-1/

The images on the listing page carry a watermark, but once you click through to an image's detail page the watermark is gone.

First, test whether the site has any anti-scraping measures:

import requests
from bs4 import BeautifulSoup
import os

html = requests.get('https://i588ku.com/beijing/0-0-default-0-8-0-0-0-0-1/')
print(html.text)

The response is a 404 page; adding a User-Agent header fixes it.

Each image sits in a div whose class attribute looks like fl marony-item bglist_5993476: three classes, the last of which is a per-image number, so we select on only the first two classes.
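The CSS selector div.fl.marony-item matches on the two stable classes while ignoring the per-image bglist_* class. A minimal sketch against a made-up HTML fragment (the markup below is an assumption modeled on the listing page's structure at the time):

```python
from bs4 import BeautifulSoup

# Hypothetical fragment mimicking the listing page's structure
html = '''
<div class="fl marony-item bglist_5993476">
  <div><a href="//i588ku.com/ycbeijing/5993476.html">pic</a></div>
</div>
<div class="fl marony-item bglist_5991004">
  <div><a href="//i588ku.com/comnew/vip/">ad</a></div>
</div>
'''

# html.parser is used here so the sketch needs no lxml install;
# a compound class selector matches elements carrying both classes
soup = BeautifulSoup(html, 'html.parser')
links = [a['href'] for a in soup.select('div.fl.marony-item div a')]
print(links)
```

This prints both the image link and the ad link, which is exactly what the full script sees and why the ad links have to be filtered out later.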

From those divs we can extract each image's page URL:

import requests
from bs4 import BeautifulSoup
import os

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'
}

html = requests.get('https://i588ku.com/beijing/0-0-default-0-8-0-0-0-0-1/', headers=headers)
soup = BeautifulSoup(html.text, 'lxml')
Urlimags = soup.select('div.fl.marony-item div a')
for Urlimag in Urlimags:
    print(Urlimag['href'])

The output:

//i588ku.com/ycbeijing/5993476.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5991004.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5990729.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5991308.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5990409.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5989982.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5978978.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5993625.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5990728.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5951314.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5992353.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5993626.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5992302.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5820069.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5804406.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5960482.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5881533.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5986104.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5956726.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5986063.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5978787.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5954475.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5959200.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5973667.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5850381.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5898111.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5924657.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5975496.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5928655.html
//i588ku.com/comnew/vip/
//i588ku.com/ycbeijing/5963925.html
//i588ku.com/comnew/vip/

Every other link points to the /comnew/vip/ ad page, so filter those out:

for Urlimag in Urlimags:
    if 'vip' in Urlimag['href']:
        continue
    print('http:' + Urlimag['href'])
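Note that the hrefs are protocol-relative (they start with //), so a scheme has to be prepended before requests can fetch them. The filter-and-prefix logic can be checked on its own with sample data:

```python
# Sample hrefs as seen in the listing-page output above
hrefs = [
    '//i588ku.com/ycbeijing/5993476.html',
    '//i588ku.com/comnew/vip/',
    '//i588ku.com/ycbeijing/5991004.html',
]

# Drop the /comnew/vip/ ad links, prepend a scheme on the rest
detail_urls = ['http:' + h for h in hrefs if 'vip' not in h]
print(detail_urls)
```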

Then save the images locally, using os to create the output folder:

import requests
from bs4 import BeautifulSoup
import os

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'
}

html = requests.get('https://i588ku.com/beijing/0-0-default-0-8-0-0-0-0-1/', headers=headers)
soup = BeautifulSoup(html.text, 'lxml')
Urlimags = soup.select('div.fl.marony-item div a')
for Urlimag in Urlimags:
    if 'vip' in Urlimag['href']:
        continue
    # print('http:' + Urlimag['href'])
    imgurl = requests.get('http:' + Urlimag['href'], headers=headers)
    imgsoup = BeautifulSoup(imgurl.text, 'lxml')
    imgdatas = imgsoup.select_one('.img-box img')
    title = imgdatas['alt']
    print('无水印:', 'https:' + imgdatas['src'])

    if not os.path.exists('千图网图片'):
        os.mkdir('千图网图片')
    with open('千图网图片/{}.jpg'.format(title), 'wb') as f:
        f.write(requests.get('https:' + imgdatas['src'], headers=headers).content)
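One caveat with the code above: the alt text used as the file name may contain characters that are illegal in file names (slashes, colons, etc.), which would make open() fail. A hedged sketch of a sanitizing helper (the character set here is an assumption covering the usual Windows/Unix restrictions; adjust as needed):

```python
import re

def safe_filename(title: str) -> str:
    # Replace characters that Windows and Unix disallow in file names,
    # and fall back to a placeholder if nothing usable remains
    cleaned = re.sub(r'[\\/:*?"<>|]', '_', title).strip()
    return cleaned or 'untitled'

print(safe_filename('blue/tech background?'))
```

Passing the title through safe_filename() before the format() call keeps the download loop from crashing on a single badly named image.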

Finally, to download multiple pages, look at the URL pattern:
Page 1: https://i588ku.com/beijing/0-0-default-0-8-0-0-0-0-1/
Page 2: https://i588ku.com/beijing/0-0-default-0-8-0-0-0-0-2/
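Only the trailing number changes, so the page URLs can be generated with format(). A quick check of the pattern:

```python
# Page number is the last path segment before the trailing slash
base = 'https://i588ku.com/beijing/0-0-default-0-8-0-0-0-0-{}/'
pages = [base.format(i) for i in range(1, 4)]
for url in pages:
    print(url)
```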

import requests
from bs4 import BeautifulSoup
import os

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'
}

for i in range(1, 11):
    print('正在下载第{}页'.format(i))
    html = requests.get('https://i588ku.com/beijing/0-0-default-0-8-0-0-0-0-{}/'.format(i), headers=headers)
    soup = BeautifulSoup(html.text, 'lxml')
    Urlimags = soup.select('div.fl.marony-item div a')
    for Urlimag in Urlimags:
        if 'vip' in Urlimag['href']:
            continue
        # print('http:' + Urlimag['href'])
        imgurl = requests.get('http:' + Urlimag['href'], headers=headers)
        imgsoup = BeautifulSoup(imgurl.text, 'lxml')
        imgdatas = imgsoup.select_one('.img-box img')
        title = imgdatas['alt']
        print('无水印:', 'https:' + imgdatas['src'])

        if not os.path.exists('千图网图片'):
            os.mkdir('千图网图片')
        with open('千图网图片/{}.jpg'.format(title), 'wb') as f:
            f.write(requests.get('https:' + imgdatas['src'], headers=headers).content)


Original post: https://www.cnblogs.com/yicunyiye/p/13666054.html
