
Working with Data Sources 2


Web Scraping:

1. We can also use requests.get to get the HTML file from a webpage, as in the sketch below.
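For example (a minimal sketch; the URL is a placeholder for illustration, not from the original notes):

  import requests

  response = requests.get("http://example.com/simple.html") # fetch the page (placeholder URL)

  content = response.content # the raw HTML as bytes; response.text gives a decoded string

  print(response.status_code) # 200 means the request succeeded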

2. If we would like to extract content from the webpage, we can use the BeautifulSoup library.

  from bs4 import BeautifulSoup

  parser = BeautifulSoup(content, 'html.parser') # initialize the parser by passing the HTML content to BeautifulSoup

  body = parser.body # extract the <body></body> tag from the parser

  p = body.p # get the first <p></p> tag inside the body

  head = parser.head # extract the <head></head> tag from the parser

  title_text = head.title.text # get the text inside <title></title>
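To make these lines runnable on their own, here is a minimal sketch with a made-up HTML string standing in for content (the page text is illustrative only):

  content = "<html><head><title>A simple example page</title></head><body><p>First paragraph.</p></body></html>"

  parser = BeautifulSoup(content, 'html.parser')

  print(parser.head.title.text) # prints: A simple example page

  print(parser.body.p.text) # prints: First paragraph.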

3. We can use the find_all function to find all matching elements in the webpage. find_all can only be called on bs4 element (tag) objects.

  head = parser.find_all("head") # find all elements with the tag head and return them as a list

  title = head[0].find_all("title")

  title_text = title[0].text 

4. The find_all function can also find content by its id. find_all always returns a list.

  second_paragraph_text = parser.find_all("p", id ="second")[0].text

5. The find_all function can also find content by class.

  second_inner_paragraph_text = parser.find_all("p", class_="inner-text")[1].text # "p" is the tag name; class_ filters by CSS class

6. We can also use CSS selectors to find specific content. Like the find_all method, the select method works on bs4 objects and returns a list.

  first_outer_text = parser.select(".outer-text")[0].text 

  second_text = parser.select("#second")[0].text
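Putting the pieces together, a self-contained sketch; the ids and class names below mirror the examples above but the HTML itself is made up for illustration:

  from bs4 import BeautifulSoup

  content = """<html><head><title>A simple page</title></head><body>
  <p class="inner-text">First inner paragraph.</p>
  <p class="inner-text" id="second">Second inner paragraph.</p>
  <p class="outer-text">First outer paragraph.</p>
  </body></html>"""

  parser = BeautifulSoup(content, 'html.parser')

  print(parser.find_all("p", id="second")[0].text) # find by id: Second inner paragraph.

  print(parser.find_all("p", class_="inner-text")[1].text) # find by class: Second inner paragraph.

  print(parser.select(".outer-text")[0].text) # CSS class selector: First outer paragraph.

  print(parser.select("#second")[0].text) # CSS id selector: Second inner paragraph.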

 


Source: http://www.cnblogs.com/kingoscar/p/6072286.html
