
Getting Started with Python Web Scraping

  • Web scraping (crawling): writing a program that simulates a browser visiting the web and then pulls data down from the internet.
  • Types of crawlers:

    • General-purpose crawlers: fetch entire pages broadly, as a search engine does.
    • Focused crawlers: fetch only the specific page data you are interested in.
    • Incremental crawlers: fetch only data that is new or has changed since the last run.
  • Anti-scraping mechanisms: techniques a site uses to block crawlers (e.g. User-Agent checks).

  • Anti-anti-scraping strategies: the countermeasures a crawler uses to get around those mechanisms (e.g. UA spoofing).

  • robots.txt protocol: the site states which paths crawlers may visit; you decide whether or not to honor it (see the sketch right after this list).
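As a hedged illustration of the robots.txt point, the sketch below uses Python's standard urllib.robotparser to check whether a URL may be fetched before crawling it. The target site and the crawler name are placeholder assumptions, not part of the original post.

# Minimal sketch: consult robots.txt before crawling (site and user agent are
# hypothetical placeholders used only for illustration)
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url('https://www.example.com/robots.txt')
rp.read()

# can_fetch() reports whether this user agent is allowed to request the path
allowed = rp.can_fetch('MyCrawler/1.0', 'https://www.example.com/some/page')
print('allowed to fetch:', allowed)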

 

Workflow for writing code with the requests module:

  • Specify the URL
  • Send the request
  • Get the data out of the response object
  • Persist the data
# Use requests to fetch the Sogou homepage
import requests
# 1. Specify the URL
url = 'https://www.sogou.com/'
# 2. Send the request
response = requests.get(url=url)
# 3. Get the data from the response object
page_text = response.text
# 4. Persist the data
with open('./sogou.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)
# Requirement: scrape the result page returned by a Sogou search for a given term
import requests
url = 'https://www.sogou.com/web'
# Bundle the query parameters
wd = input('enter a word:')
param = {
    'query': wd
}
response = requests.get(url=url, params=param)

page_text = response.content
fileName = wd + '.html'
with open(fileName, 'wb') as fp:
    fp.write(page_text)
    print('over')
# Scrape Baidu Translate suggestion results
import requests
url = 'https://fanyi.baidu.com/sug'
wd = input('enter a word:')
data = {
    'kw': wd
}
response = requests.post(url=url, data=data)

print(response.json())

# response.text    : the response body as a decoded string
# response.content : the response body as raw bytes
# response.json()  : the response body deserialized from JSON into a Python object
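To make the difference between the three accessors concrete, here is a small hedged sketch that requests a JSON endpoint and prints the type of each form; httpbin.org is an assumed public test service, not something used in the original post.

# Minimal sketch (assumption: httpbin.org is reachable and returns JSON)
import requests

resp = requests.get('https://httpbin.org/get', params={'demo': '1'})
print(type(resp.text))     # <class 'str'>   - decoded text
print(type(resp.content))  # <class 'bytes'> - raw bytes
print(type(resp.json()))   # <class 'dict'>  - parsed JSON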
# Scrape movie detail data from the Douban movie chart (https://movie.douban.com/)
import requests
url = 'https://movie.douban.com/j/chart/top_list'
param = {
    "type": "5",
    "interval_id": "100:90",
    "action": "",
    "start": "60",
    "limit": "100",
}

movie_data = requests.get(url=url, params=param).json()

print(movie_data)
# Requirement: scrape cosmetics production licence data from the China NMPA site
# http://125.35.6.84:81/xk/
# Anti-scraping mechanism here: User-Agent detection  --> counter it with UA spoofing
import requests
url = 'http://125.35.6.84:81/xk/itownet/portalAction.do?method=getXkzsList'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
}
# Step 1: page through the list endpoint and collect the record IDs
id_list = []
for page in range(1, 11):
    data = {
        "on": "true",
        "page": str(page),
        "pageSize": "15",
        "productName": "",
        "conditionType": "1",
        "applyname": "",
        "applysn": "",
    }
    json_data = requests.post(url=url, data=data, headers=headers).json()
    for dic in json_data['list']:
        id_list.append(dic['ID'])

# Step 2: use each ID to fetch the corresponding detail record
detail_url = 'http://125.35.6.84:81/xk/itownet/portalAction.do?method=getXkzsById'
for id in id_list:
    detail_data = {
        'id': id
    }
    detail_json = requests.post(url=detail_url, data=detail_data, headers=headers).json()
    print(detail_json)
# Download an image with requests
import requests
url = 'https://ss2.bdstatic.com/70cFvnSh_Q1YnxGkpoWK1HF6hhy/it/u=806201715,3137077445&fm=26&gp=0.jpg'
img_data = requests.get(url=url, headers=headers).content
with open('./xiaohua.jpg', 'wb') as fp:
    fp.write(img_data)

# Download the same image with urllib instead
import urllib.request
url = 'https://ss2.bdstatic.com/70cFvnSh_Q1YnxGkpoWK1HF6hhy/it/u=806201715,3137077445&fm=26&gp=0.jpg'
urllib.request.urlretrieve(url=url, filename='./123.jpg')
import re
string = '''fall in love with you
i love you very much
i love she
i love her'''

# re.M (multi-line mode): ^ matches at the start of every line, not just of the string
re.findall('^i.*', string, re.M)
#####################################################################
# Match across all lines: re.S makes . also match newline characters
string1 = """细思极恐
你的队友在看书
你的敌人在磨刀
你的闺蜜在减肥
隔壁老王在练腰
"""
re.findall('.*', string1, re.S)
# Scrape the thumbnail images from qiushibaike.com's picture section, page by page
import requests
import re
import urllib.request
import os
url = 'https://www.qiushibaike.com/pic/page/%d/?s=5170552'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
}
# Create a local folder for the downloaded images
if not os.path.exists('./qiutu'):
    os.mkdir('./qiutu')

start_page = int(input('enter a start pageNum:'))
end_page = int(input('enter an end pageNum:'))

for page in range(start_page, end_page + 1):
    # Substitute the page number into the URL template
    new_url = url % page
    page_text = requests.get(url=new_url, headers=headers).text
    # Pull every image src out of the page with a regex (re.S lets . span newlines)
    img_url_list = re.findall('<div class="thumb">.*?<img src="(.*?)" alt=.*?</div>', page_text, re.S)
    for img_url in img_url_list:
        img_url = 'https:' + img_url
        imgName = img_url.split('/')[-1]
        imgPath = 'qiutu/' + imgName
        urllib.request.urlretrieve(url=img_url, filename=imgPath)
        print(imgPath, 'downloaded!')

print('over!!!')
  • Parsing with bs4: 1. pip install bs4  2. pip install lxml

  • Parsing workflow (a minimal sketch of the three steps follows this list):

    • 1. Load the source code to be parsed into a BeautifulSoup object
    • 2. Call the object's methods or attributes to locate the target tags in the source
    • 3. Extract the text or attribute values held by the located tags
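Before the full example below, here is a small hedged sketch of those three steps run on an inline HTML snippet; the snippet itself is invented for illustration and only mirrors the structure the example selects against.

# Minimal sketch of the bs4 workflow (assumption: bs4 and lxml are installed;
# the HTML string below is made up for illustration)
from bs4 import BeautifulSoup

html = '<div class="book-mulu"><ul><li><a href="/book/1.html">Chapter 1</a></li></ul></div>'

# 1. Load the source into a BeautifulSoup object
soup = BeautifulSoup(html, 'lxml')
# 2. Locate the target tag with a CSS selector
a = soup.select('.book-mulu > ul > li > a')[0]
# 3. Extract the text and an attribute value from the located tag
print(a.string)   # Chapter 1
print(a['href'])  # /book/1.html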
# Scrape every chapter of "Romance of the Three Kingdoms" from shicimingju.com
import requests
from bs4 import BeautifulSoup
url = 'http://www.shicimingju.com/book/sanguoyanyi.html'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
}
page_text = requests.get(url=url, headers=headers).text

soup = BeautifulSoup(page_text, 'lxml')

# Each <a> in the table of contents points to one chapter page
a_list = soup.select('.book-mulu > ul > li > a')

fp = open('sanguo.txt', 'w', encoding='utf-8')
for a in a_list:
    title = a.string
    detail_url = 'http://www.shicimingju.com' + a['href']
    detail_page_text = requests.get(url=detail_url, headers=headers).text

    soup = BeautifulSoup(detail_page_text, 'lxml')
    content = soup.find('div', class_='chapter_content').text

    fp.write(title + '\n' + content)
    print(title, 'downloaded')
print('over')
fp.close()

 


Original post: https://www.cnblogs.com/songhuasheng/p/10440387.html
