
Scraping CVPR2018 Papers with Python


Abstract: scrape the following for each CVPR2018 paper: title, abstract, keywords, and the paper link.

1. Creating the database table (MySQL)

Note: declare the abstract column as TEXT, not a short VARCHAR; paper abstracts are long and will not fit otherwise.
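
The original screenshot of the table definition is missing, so here is a minimal sketch of how the table could be created with pymysql, matching the four column names used by the INSERT statement in the script below. The id column and the VARCHAR lengths are assumptions, not from the original post; the connection parameters are the ones used in section 2.

import pymysql

# Hypothetical reconstruction of the lunwens table; only the four data
# columns are actually required by the scraper in section 2.
db = pymysql.connect(host="localhost", user="root", password="123456",
                     database="lunwen", charset="utf8")
cursor = db.cursor()
cursor.execute("""
    CREATE TABLE IF NOT EXISTS lunwens (
        id       INT AUTO_INCREMENT PRIMARY KEY,  -- assumed surrogate key
        title    VARCHAR(255),
        abstract TEXT,       -- must be TEXT: abstracts exceed short VARCHARs
        link     VARCHAR(255),
        keywords VARCHAR(512)
    ) DEFAULT CHARSET=utf8
""")
db.commit()
db.close()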

2. Scraping with Python

import requests
from bs4 import BeautifulSoup
import pymysql

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'}  # request headers that mimic a browser
url = 'http://openaccess.thecvf.com/CVPR2018.py'
print(url)
r = requests.get(url, headers=headers)
content = r.content.decode('utf-8')

soup = BeautifulSoup(content, 'html.parser')
dts = soup.find_all('dt', class_='ptitle')  # each <dt class="ptitle"> holds one paper title
print(dts)
hts = 'http://openaccess.thecvf.com/'
# scrape title, abstract, link and keywords for every paper
alllist = []
for i in range(len(dts)):
    print('paper No. ' + str(i))
    title = dts[i].a.text.strip()
    href = hts + dts[i].a['href']          # URL of the paper's detail page
    r = requests.get(href, headers=headers)
    content = r.content.decode('utf-8')
    soup = BeautifulSoup(content, 'html.parser')
    # print(title, href)
    divabstract = soup.find(name='div', attrs={"id": "abstract"})
    abstract = divabstract.text.strip()
    # print('No. ' + str(i) + ':', abstract)
    alllink = soup.select('a')
    link = hts + alllink[4]['href'][6:]    # the 5th <a> on the detail page points to the PDF
    keyword = str(title).split(' ')        # use the words of the title as keywords
    keywords = ''
    for k in range(len(keyword)):
        if (k == 0):
            keywords += keyword[k]
        else:
            keywords += ',' + keyword[k]
    value = (title, abstract, link, keywords)
    alllist.append(value)
print(alllist)
tuplist = tuple(alllist)
# save the data to MySQL
db = pymysql.connect(host="localhost", user="root", password="123456",
                     database="lunwen", charset="utf8")  # keyword args: positional connect() args are removed in newer pymysql
cursor = db.cursor()
sql_cvpr = "INSERT INTO lunwens(title, abstract, link, keywords) values (%s,%s,%s,%s)"
try:
    cursor.executemany(sql_cvpr, tuplist)
    db.commit()
except Exception:
    print('insert failed, rolling back')
    db.rollback()
db.close()
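
To check that the rows actually landed in the table, a quick sanity check such as the following can be run afterwards; this is only a sketch using the same (assumed) connection parameters as above.

import pymysql

# Quick sanity check: count the inserted rows and print one of them.
db = pymysql.connect(host="localhost", user="root", password="123456",
                     database="lunwen", charset="utf8")
cursor = db.cursor()
cursor.execute("SELECT COUNT(*) FROM lunwens")
print("rows inserted:", cursor.fetchone()[0])
cursor.execute("SELECT title, link FROM lunwens LIMIT 1")
print(cursor.fetchone())
db.close()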

 


Original post: https://www.cnblogs.com/MoooJL/p/12782860.html
