
[Python Web Scraping] Basic Usage of the Urllib Library


1. Sending a GET request

import urllib.request

response = urllib.request.urlopen("http://www.baidu.com")
print(response.read().decode("utf-8"))

2. Sending a POST request

import urllib.parse
import urllib.request

data = bytes(urllib.parse.urlencode({"word": "hello"}), encoding="utf8")
print(data)
response = urllib.request.urlopen("http://httpbin.org/post", data=data)
print(response.read())
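The data-preparation step above can be checked without any network traffic. This small sketch shows what urlencode and bytes() actually produce, which is what urlopen requires for the data argument:

```python
from urllib.parse import urlencode

# urlencode builds the query string; POST bodies must be bytes, so encode it
payload = urlencode({"word": "hello"})
data = bytes(payload, encoding="utf8")
print(payload)  # word=hello
print(data)     # b'word=hello'
```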

3. Using the timeout parameter

When the network is poor or the server misbehaves, a request may be slow or fail outright. In those cases we should set a timeout on the request rather than let the program wait indefinitely. For example:

import urllib.request

response = urllib.request.urlopen("http://httpbin.org/get", timeout=12)
print(response.read())

4. Handling exceptions

import socket
import urllib.request
import urllib.error

try:
    response = urllib.request.urlopen("http://httpbin.org/get", timeout=0.1)
except urllib.error.URLError as e:
    if isinstance(e.reason, socket.timeout):
        print("TIME OUT")
URLError is also raised for other failures, for example an unreachable page:

from urllib import request, error

try:
    response = request.urlopen("http://pythonsite.com/1111.html")
except error.URLError as e:
    print(e.reason)
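HTTPError (a subclass of URLError) is raised when the server answers with an error status code, while a plain URLError means the request never completed at all (DNS failure, refused connection, timeout). A sketch using the reserved ".invalid" top-level domain, which is guaranteed never to resolve, so it fails without needing a real server:

```python
from urllib import request, error

try:
    # ".invalid" is reserved (RFC 2606) and never resolves, so this always fails
    request.urlopen("http://nonexistent.invalid/", timeout=3)
except error.HTTPError as e:
    # the server replied, but with an error status code
    print("HTTP error:", e.code, e.reason)
except error.URLError as e:
    # the request never reached a server at all
    print("URL error:", e.reason)
```

Note that HTTPError must be caught before URLError, since it is a subclass.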


5. Adding headers to a request

from urllib import request, parse

url = "http://httpbin.org/post"
headers = {
    "User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)",
    "Host": "httpbin.org"
}
payload = {
    "name": "zhaofan"
}
data = bytes(parse.urlencode(payload), encoding="utf8")
req = request.Request(url=url, data=data, headers=headers, method="POST")
response = request.urlopen(req)
print(response.read().decode("utf-8"))
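A Request object can be inspected before it is sent, which is handy for checking that the method, headers, and body are set as expected. Note that urllib stores header names capitalized (e.g. "User-agent"):

```python
from urllib import request, parse

url = "http://httpbin.org/post"
headers = {"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"}
data = bytes(parse.urlencode({"name": "zhaofan"}), encoding="utf8")
req = request.Request(url=url, data=data, headers=headers, method="POST")

# No network traffic happens until request.urlopen(req) is called
print(req.get_method())               # POST
print(req.get_header("User-agent"))   # header names are stored capitalized
print(req.full_url)
print(req.data)                       # b'name=zhaofan'
```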

6. Fetching data through a proxy

import urllib.request

proxy_handler = urllib.request.ProxyHandler({
    "http": "http://127.0.0.1:9743",
    "https": "https://127.0.0.1:9743"
})
opener = urllib.request.build_opener(proxy_handler)
response = opener.open("http://httpbin.org/get")
print(response.read())

7. Cookies and HTTPCookieProcessor

Cookies hold the familiar login/session information, and sometimes a site must be crawled with cookies attached. The http.cookiejar module is used here to obtain and store cookies.

import http.cookiejar, urllib.request
cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open("http://www.baidu.com")
for item in cookie:
    print(item.name+"="+item.value)

Cookies can also be written to a file for later use. There are two classes for this, http.cookiejar.MozillaCookieJar and http.cookiejar.LWPCookieJar; either one works.

Concrete code examples follow.

The http.cookiejar.MozillaCookieJar() approach:

import http.cookiejar, urllib.request
filename = "cookie.txt"
cookie = http.cookiejar.MozillaCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open("http://www.baidu.com")
cookie.save(ignore_discard=True, ignore_expires=True)

The http.cookiejar.LWPCookieJar() approach:

import http.cookiejar, urllib.request
filename = "cookie.txt"
cookie = http.cookiejar.LWPCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open("http://www.baidu.com")
cookie.save(ignore_discard=True, ignore_expires=True)

Likewise, to read the cookies back from the file, use the load method. Whichever class wrote the file must also be used to read it.

import http.cookiejar, urllib.request
cookie = http.cookiejar.LWPCookieJar()
cookie.load("cookie.txt", ignore_discard=True, ignore_expires=True)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open("http://www.baidu.com")
print(response.read().decode("utf-8"))

8. urlencode

This method converts a dictionary into URL query parameters.

from urllib.parse import urlencode

params = {
    "name":"zhaofan",
    "age":23,
}
base_url = "http://www.baidu.com?"

url = base_url+urlencode(params)
print(url)
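urllib.parse also offers the reverse operation, parse_qs, and quote for percent-encoding individual values; a quick sketch:

```python
from urllib.parse import urlencode, parse_qs, quote

params = {"name": "zhaofan", "age": 23}
query = urlencode(params)
print(query)                 # name=zhaofan&age=23
print(parse_qs(query))       # {'name': ['zhaofan'], 'age': ['23']}
print(quote("hello world"))  # hello%20world
```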


Original: https://www.cnblogs.com/sheep9527/p/13833942.html
