Spider Basics, Part 5 --- Proxies, Exceptions, Captchas, AI
Original (in Chinese): https://www.cnblogs.com/TMMM/p/11393262.html

from urllib import request

url = "https://www.baidu.com/s?wd=ip"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.80 Safari/537.36'}

# Create a request object
req = request.Request(url=url, headers=headers)

# Create a handler that routes HTTP traffic through a proxy
handler = request.ProxyHandler({"http": '122.241.88.79:15872'})

# Create an opener that carries the handler
opener = request.build_opener(handler)

# Send the request through the opener (and therefore through the proxy)
res = opener.open(req)

# Write the response body to a file
with open("ip.html", 'wb') as fp:
    fp.write(res.read())
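Free proxies like the one in the example are often dead, so requests through them frequently fail with `urllib.error.URLError` or `urllib.error.HTTPError`. Since this part of the series also covers exception handling, here is a minimal sketch that wraps the proxy request in try/except; the function name `fetch_via_proxy` and its defaults are my own, not from the original post.

```python
from urllib import request, error

def fetch_via_proxy(url, proxy, timeout=10):
    """Fetch url through an HTTP proxy; return the body bytes, or None on failure."""
    handler = request.ProxyHandler({"http": proxy})
    opener = request.build_opener(handler)
    req = request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with opener.open(req, timeout=timeout) as res:
            return res.read()
    except error.HTTPError as e:
        # The server (or proxy) answered with an error status code (4xx/5xx)
        print("HTTP error:", e.code)
    except error.URLError as e:
        # Dead proxy, DNS failure, connection refused, timeout, ...
        print("Connection failed:", e.reason)
    return None
```

With this wrapper, a dead proxy degrades to a `None` return instead of an unhandled traceback, so the caller can retry with the next proxy from a pool.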