
scrapy

Posted: 2015-12-03 23:18:45

Install Sphinx, initialize a documentation project, and generate API stub pages for the local selenium package with sphinx-apidoc (general form first, then the concrete invocation):

pip install Sphinx
cd /d E:\work\soft\selenium-2.48.0\py\selenium
sphinx-quickstart

sphinx-apidoc -o outputdir packagedir
sphinx-apidoc -s .txt -o E:\work\soft\selenium-2.48.0\source E:\work\soft\selenium-2.48.0\py
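A minimal sketch of the follow-up build step, assuming the sphinx-quickstart defaults (conf.py in the source directory); the build output path here is illustrative, and if the .txt suffix is used, source_suffix in conf.py must be set to match:

sphinx-build -b html E:\work\soft\selenium-2.48.0\source E:\work\soft\selenium-2.48.0\build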


Reference tutorial: http://blog.csdn.net/pleasecallmewhy/article/details/19642329


The spider below follows that tutorial's pattern: a Scrapy CrawlSpider hands matched pages to Selenium RC so that JavaScript-rendered content can be loaded in a real browser before extraction.

# -*- coding: utf-8 -*-
# Python 2 / Scrapy 0.2x-era code: scrapy.contrib and the Selenium RC API
# ("from selenium import selenium") are deprecated in later releases.
from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

from selenium import selenium  # Selenium RC client; needs a running RC server

from shiyifang.items import ShiyifangItem

import time


class ShiyifangSpider(CrawlSpider):
    name = "shiyifang"
    allowed_domains = ["taobao.com"]
    start_urls = [
        "http://www.taobao.com"
    ]

    # Follow the womenswear category page and hand each match to parse_page.
    rules = (
        Rule(SgmlLinkExtractor(allow=('/market/nvzhuang/index.php?spm=a217f.7297021.a214d5w.2.tvAive', )),
             callback='parse_page', follow=True),
    )

    def __init__(self):
        CrawlSpider.__init__(self)
        self.verificationErrors = []
        # Connect to a Selenium RC server on localhost:4444 driving Firefox.
        self.selenium = selenium("localhost", 4444, "*firefox", "http://www.taobao.com")
        self.selenium.start()

    def __del__(self):
        # Shut the browser session down when the spider is garbage-collected.
        # (CrawlSpider defines no __del__ of its own, so there is nothing to
        # chain to.)
        self.selenium.stop()
        print self.verificationErrors

    def parse_page(self, response):
        # Scrapy's selector over the raw (non-rendered) response.
        sel = Selector(response)

        # Re-open the URL in the Selenium-controlled browser and wait for
        # the page, including its JavaScript, to finish loading.
        sln = self.selenium
        sln.open(response.url)
        sln.wait_for_page_to_load("30000")
        time.sleep(2.5)
        # ... (the snippet ends here; populating a ShiyifangItem from the
        # rendered page would follow)
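Selenium RC and scrapy.contrib were both deprecated later; below is a minimal sketch of the same pattern against the Selenium WebDriver API and post-1.0 Scrapy module paths, assuming Firefox and its driver are installed (the spider name and the link pattern here are illustrative, not from the original tutorial):

# Same CrawlSpider-plus-browser pattern, WebDriver edition (a sketch, not
# the original tutorial's code).
from scrapy.linkextractors import LinkExtractor
from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider, Rule
from selenium import webdriver


class ShiyifangWebDriverSpider(CrawlSpider):
    name = "shiyifang_webdriver"
    allowed_domains = ["taobao.com"]
    start_urls = ["http://www.taobao.com"]

    rules = (
        Rule(LinkExtractor(allow=(r'/market/nvzhuang/', )),
             callback='parse_page', follow=True),
    )

    def __init__(self, *args, **kwargs):
        super(ShiyifangWebDriverSpider, self).__init__(*args, **kwargs)
        # One real browser per spider; WebDriver replaces the RC server.
        self.driver = webdriver.Firefox()

    def closed(self, reason):
        # Scrapy calls closed() when the spider finishes; quit the browser.
        self.driver.quit()

    def parse_page(self, response):
        # Render the page (including JavaScript) in the browser, then
        # select against the rendered source instead of the raw response.
        self.driver.get(response.url)
        sel = Selector(text=self.driver.page_source)
        # ... extract fields from sel and yield items here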


Original post: http://www.cnblogs.com/zhang-pengcheng/p/5017893.html
