1. Get the details of one news article from its URL: a dictionary, anews
import requests
import re
from bs4 import BeautifulSoup
import time
import random

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')
soup.select('.news-list')[0].find_all('a')
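The heading above mentions an anews dictionary for a single article, but the snippet only fetches the list page. Below is a minimal sketch of a detail-page parser; the '.show-title' and '.show-info' selectors are assumptions about the detail-page markup and are not confirmed by this post.

# Sketch of anews(url): parse one article page into a dict.
# The '.show-title' and '.show-info' selectors are assumed, not confirmed here.
def anews(url):
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    news = {}
    news['url'] = url
    news['title'] = soup.select('.show-title')[0].text   # assumed selector
    news['info'] = soup.select('.show-info')[0].text      # assumed selector
    return news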
2. Get the news URLs from a list-page URL: list.append(dict), alist
# Preview the links on the list page
for i in soup.select('.news-list')[0].find_all('a'):
    print(i.select('.news-list-title'))
    print(i['href'])

# Parse one list page into a list of dictionaries (title, description, date, publisher)
def newsinfo(url):
    alist = []
    page_res = requests.get(url)
    page_res.encoding = 'utf-8'
    soup1 = BeautifulSoup(page_res.text, 'html.parser')
    li = soup1.select('.news-list')[0].find_all('a')
    for j in li:
        dictionary = {}
        dictionary['title'] = j.select('.news-list-title')[0].text
        dictionary['description'] = j.select('.news-list-description')[0].text
        dictionary['date'] = j.select('.news-list-info')[0].select('span')[0].text
        dictionary['publisher'] = j.select('.news-list-info')[0].select('span')[1].text
        alist.append(dictionary)
    return alist
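For example, calling newsinfo on the list page fetched above returns one dictionary per article:

# Example call: parse the first list page into a list of dictionaries
alist = newsinfo('http://news.gzcc.cn/html/xiaoyuanxinwen/')
print(len(alist))   # number of news items on the page
print(alist[0])     # {'title': ..., 'description': ..., 'date': ..., 'publisher': ...}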
3. Generate the URLs of all the list pages and fetch all the news: list.extend(list), allnews
*Each student crawls the 10 list pages starting from the last digits of their student ID
4. Set a reasonable crawl interval
import time
import random
time.sleep(random.random()*3)
import time
import random

# Crawl list pages 30-39, pausing a random 0-3 seconds between requests
allnews = []
for h in range(30, 40):
    a = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(h)
    alist = newsinfo(a)
    time.sleep(random.random() * 3)
    allnews.extend(alist)
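The range(30, 40) above corresponds to one particular student ID. A more general sketch that derives the ten pages from the last digits of a student ID is shown below; the sid value is a hypothetical example and the page-numbering scheme of the site is assumed to match the URL pattern used above.

# Sketch: derive the ten list pages from the last digits of a student ID.
# 'sid' is a hypothetical example value.
sid = '33'                          # last digits of the student ID (example)
start = int(sid)
allnews = []
for h in range(start, start + 10):
    page_url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(h)
    allnews.extend(newsinfo(page_url))
    time.sleep(random.random() * 3)  # polite crawl interval, as in step 4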
5. Do simple data processing with pandas and save the results
Save to a CSV or Excel file
newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')
import pandas as pd

newsdf = pd.DataFrame(data=allnews)
newsdf.to_csv('news.csv', encoding='utf_8_sig')  # utf_8_sig so Excel displays Chinese correctly
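Step 5 also mentions simple processing and an Excel option. A small sketch, assuming the 'date' strings parse with pd.to_datetime and that openpyxl is installed for to_excel:

# Simple processing sketch: parse the 'date' column, sort newest first, save to Excel.
# Assumes the date strings are parseable and openpyxl is available for to_excel.
newsdf['date'] = pd.to_datetime(newsdf['date'], errors='coerce')
newsdf = newsdf.sort_values('date', ascending=False)
newsdf.to_excel('news.xlsx', index=False)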
Save to a database
import sqlite3
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    newsdf.to_sql('gzccnewsdb', db)
import sqlite3

# Write the DataFrame into an SQLite database file (sqlite3 cannot write an .xlsx workbook)
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    newsdf.to_sql('gzccnewsdb', db, if_exists='replace')
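To check the saved table, it can be read back from the same database with pandas:

# Read the table back to verify the save
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    check = pd.read_sql_query('SELECT * FROM gzccnewsdb', db)
print(check.head())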
Original post: https://www.cnblogs.com/pybblog/p/10672034.html