
Web Crawlers, Part 22: [CrawlSpider in Practice] A Crawler for the WeChat Mini Program Community


Contents

一. CrawlSpider
二. A CrawlSpider Example
  1. Project Layout
  2. wxapp_spider.py
  3. items.py
  4. pipelines.py
  5. settings.py
  6. start.py
三. Key Takeaways

一. CrawlSpider

In practice we usually want to crawl only the URLs that match some specific pattern, and that is exactly what CrawlSpider is for.

CrawlSpider inherits from Spider and adds one capability on top of it: you declare rules describing which URLs to crawl, and Scrapy automatically requests every link that matches them, so there is no need to yield each Request by hand.
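As a minimal sketch of the idea (the site, URL patterns, and XPath below are hypothetical), each Rule pairs a LinkExtractor with an optional callback and a follow flag:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class DemoSpider(CrawlSpider):
    name = 'demo'
    allowed_domains = ['example.com']                  # hypothetical site
    start_urls = ['http://example.com/list?page=1']

    rules = (
        # Pagination: keep following matching links, nothing to parse here.
        Rule(LinkExtractor(allow=r'list\?page=\d+'), follow=True),
        # Detail pages: hand each one to parse_item, stop extracting links there.
        Rule(LinkExtractor(allow=r'/item-\d+\.html'), callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        # Hypothetical XPath; adapt to the real page.
        yield {'title': response.xpath('//h1/text()').get()}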

二. A CrawlSpider Example

1. Project Layout
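Reconstructed from the files listed in the sections that follow, the project is the standard tree produced by scrapy startproject wxapp, with a hand-written start.py at the top level:

wxapp/
├── scrapy.cfg
├── start.py
└── wxapp/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── wxapp_spider.py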

2. wxapp_spider.py

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from wxapp.items import WxappItem


class WxappSpiderSpider(CrawlSpider):
    name = 'wxapp_spider'
    # NOTE: the domain is truncated in the original post; fill in the real site domain.
    allowed_domains = ['wxapp-']
    start_urls = ['http://www.wxapp-/portal.php?mod=list&catid=1&page=1']

    rules = (
        # List pages: keep following pagination links; nothing to parse on them.
        Rule(LinkExtractor(allow=r'.+mod=list&catid=1&page=\d'), follow=True),
        # Detail pages: parse each one, but do not extract further links there.
        Rule(LinkExtractor(allow=r'.+article-.+\.html'), callback='parse_detail', follow=False),
    )

    def parse_detail(self, response):
        title = response.xpath("//h1[@class='ph']/text()").get()
        author_p = response.xpath(".//p[@class='authors']")
        author = author_p.xpath("./a/text()").get()
        pub_time = author_p.xpath("./span/text()").get()
        # getall() returns every text node under the article body;
        # join and strip them to get clean body text.
        article_content = response.xpath(".//td[@id='article_content']//text()").getall()
        content = "".join(article_content).strip()
        item = WxappItem(title=title, author=author, pub_time=pub_time, content=content)
        yield item
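The allow patterns are ordinary regular expressions that LinkExtractor matches against each extracted URL (internally via re.search), so they can be sanity-checked without running a crawl. The sample URLs below are hypothetical stand-ins for the list and detail pages:

import re

list_pattern = r'.+mod=list&catid=1&page=\d'
detail_pattern = r'.+article-.+\.html'

samples = [
    'http://example.com/portal.php?mod=list&catid=1&page=2',   # list page shape
    'http://example.com/article-4029-1.html',                  # detail page shape
]
for url in samples:
    print(url,
          'list:', bool(re.search(list_pattern, url)),
          'detail:', bool(re.search(detail_pattern, url)))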

3. items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class WxappItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    author = scrapy.Field()
    pub_time = scrapy.Field()
    content = scrapy.Field()
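WxappItem only declares its fields, but that declaration is enforced at runtime: a Scrapy Item behaves like a dict restricted to the declared fields. A quick check, with made-up values:

from wxapp.items import WxappItem

item = WxappItem(title='demo title', author='someone')   # hypothetical values
item['pub_time'] = '2022-07-16'
print(dict(item))   # {'title': 'demo title', 'author': 'someone', 'pub_time': '2022-07-16'}
# item['foo'] = 'bar'   # would raise KeyError: 'foo' is not a declared field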

4. pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy.exporters import JsonLinesItemExporter


class WxappPipeline(object):
    def __init__(self):
        # The exporter writes bytes, so the file must be opened in binary mode.
        self.fp = open('wxjc.json', 'wb')
        self.exporter = JsonLinesItemExporter(self.fp, ensure_ascii=False, encoding='utf-8')

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        self.fp.close()
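JsonLinesItemExporter writes each item as a single JSON object per line as soon as it arrives, so nothing is buffered in memory and the output stays usable even if the crawl is interrupted. If one JSON array per file is preferred, scrapy.exporters.JsonItemExporter can be swapped in, at the cost of buffering plus explicit start/finish calls; a sketch (the class name is hypothetical):

from scrapy.exporters import JsonItemExporter


class WxappJsonArrayPipeline(object):
    def __init__(self):
        self.fp = open('wxjc_array.json', 'wb')
        self.exporter = JsonItemExporter(self.fp, ensure_ascii=False, encoding='utf-8')
        # JsonItemExporter buffers items and needs explicit start/finish calls
        # so it can emit a well-formed JSON array.
        self.exporter.start_exporting()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        self.exporter.finish_exporting()
        self.fp.close()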

5. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for wxapp project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'wxapp'

SPIDER_MODULES = ['wxapp.spiders']
NEWSPIDER_MODULE = 'wxapp.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'wxapp (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36',
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'wxapp.middlewares.WxappSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'wxapp.middlewares.WxappDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'wxapp.pipelines.WxappPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
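Only a handful of settings differ from the generated defaults: ROBOTSTXT_OBEY = False stops Scrapy from filtering requests through the site's robots.txt, DOWNLOAD_DELAY = 1 throttles the crawl to roughly one request per second per site, the browser-style User-Agent in DEFAULT_REQUEST_HEADERS makes requests look like they come from Chrome rather than the default Scrapy client, and ITEM_PIPELINES registers WxappPipeline so exported items actually reach wxjc.json.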

6. start.py

from scrapy import cmdline

cmdline.execute("scrapy crawl wxapp_spider".split())
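Running python start.py from the project root is equivalent to typing scrapy crawl wxapp_spider on the command line; the small wrapper just makes it convenient to launch and debug the spider from an IDE.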

三. Key Takeaways

1. A CrawlSpider is built around LinkExtractor and Rule: LinkExtractor selects URLs (allow takes a regular expression), and Rule binds an extractor to an optional callback and a follow flag.
2. Use follow=True when pages matched by a rule contain further links that should keep being extracted (the paginated list pages here); use follow=False when they do not (the article detail pages).
3. Give a rule a callback only when data needs to be parsed from the matched pages; pages that are merely stepping stones need none.
4. Do not override the parse method in a CrawlSpider: the base class uses it internally to drive the rules.
