
Crawling All the Campus News

Posted: 2019-04-08 18:08:23

1. Get the news details from a news URL: a dictionary, anews

import requests
import re
from bs4 import BeautifulSoup
import time
import random

# Fetch and parse the news list page
url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')
soup.select('.news-list')[0].find_all('a')  # all links inside the news list
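Step 1 calls for an anews function that turns one article URL into a dictionary, but the post only fetches the list page. A minimal sketch of such a function, assuming the detail pages mark the title with .show-title, the date/author line with .show-info, and the body with #content; those selector names are assumptions, not confirmed by the post:

import requests
from bs4 import BeautifulSoup

def anews(url):
    # Sketch only: fetch one article page and collect its details in a dict
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    news = {}
    news['title'] = soup.select('.show-title')[0].text          # assumed selector
    news['info'] = soup.select('.show-info')[0].text            # assumed selector (date, author, clicks)
    news['content'] = soup.select('#content')[0].text.strip()   # assumed selector
    return news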

2. Get the news URLs from a list-page URL: list append(dictionary), alist

# Preview each news entry's title element and link
for i in soup.select('.news-list')[0].find_all('a'):
    print(i.select('.news-list-title'))
    print(i['href'])

def newsinfo(url):
    # Parse one list page and return its news items as a list of dictionaries
    alist = []
    page_res = requests.get(url)
    page_res.encoding = 'utf-8'
    soup1 = BeautifulSoup(page_res.text, 'html.parser')
    li = soup1.select('.news-list')[0].find_all('a')
    for j in li:
        dictionary = {}
        dictionary['title'] = j.select('.news-list-title')[0].text
        dictionary['description'] = j.select('.news-list-description')[0].text
        dictionary['date'] = j.select('.news-list-info')[0].select('span')[0].text
        dictionary['publisher'] = j.select('.news-list-info')[0].select('span')[1].text
        alist.append(dictionary)
    return alist
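
Calling it on a single list page looks like this (the page number here is only an illustration):

alist = newsinfo('http://news.gzcc.cn/html/xiaoyuanxinwen/2.html')
print(len(alist))   # how many news items the page yielded
print(alist[0])     # the first news dictionary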

 


 

3. Generate the URLs of all list pages and fetch all the news: list extend(list), allnews

  *Each student crawls the 10 list pages starting from the tail digits of their student ID (see the sketch below).
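
The student-ID slice is looped over in step 4 below; to cover the whole site instead, the same URL pattern extends to every page. A sketch, where n is a placeholder for the real page count (the post never computes it):

# Sketch: URLs of every list page; n is a placeholder, not a value from the post
n = 50  # placeholder page count
allurls = ['http://news.gzcc.cn/html/xiaoyuanxinwen/']  # page 1 is the bare directory
allurls.extend('http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i) for i in range(2, n + 1))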

4. Set a reasonable crawl interval

import time
import random

# Pause a random 0-3 seconds between requests
time.sleep(random.random() * 3)

import time
import random

allnews = []
# Crawl list pages 30-39 (the student-ID slice), pausing between requests
for h in range(30, 40):
    a = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(h)
    alist = newsinfo(a)
    time.sleep(random.random() * 3)
    allnews.extend(alist)

5. Do simple data processing with pandas and save the results

Save to a CSV or Excel file

newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')

import pandas as pd
newsdf = pd.DataFrame(data=allnews)
newsdf.to_csv('news.csv', encoding='utf_8_sig')  # utf_8_sig keeps the Chinese text readable in Excel
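
Since the heading also mentions Excel, the same DataFrame can be written with to_excel (this needs the openpyxl package installed; the filename is just an example):

newsdf.to_excel('news.xlsx')  # requires openpyxl; filename is illustrative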


 

Save to a database

import sqlite3
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    newsdf.to_sql('gzccnewsdb', db)

import sqlite3
# SQLite stores its own database file (.sqlite), not an Excel (.xlsx) file
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    pd.DataFrame(data=allnews).to_sql('gzccnewsdb', db, if_exists='replace')  # replace the table if it already exists
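
To confirm the save worked, the table can be read back with pandas (a quick check, not part of the original post):

import sqlite3
import pandas as pd

# Read the table back to verify the rows were written
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    df = pd.read_sql_query('SELECT * FROM gzccnewsdb', db)
print(df.head())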

 


Original: https://www.cnblogs.com/pybblog/p/10672034.html
