1. Installing Scrapy
Prerequisites
- pip package management
- a working Python installation
- XPath
- CSS selectors (XPath and CSS are compared in the short sketch after this list)
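Both selector styles appear later in this tutorial; the following is a minimal, self-contained sketch comparing them (the HTML fragment and field values are made up for illustration):

```python
from scrapy.selector import Selector

# A made-up HTML fragment, only to compare XPath and CSS selector syntax
html = '<div class="news-list"><ul><li><a href="/a">Headline A</a></li></ul></div>'
sel = Selector(text=html)

# XPath: walk the element tree explicitly
print(sel.xpath('//div[@class="news-list"]//li/a/text()').extract())

# CSS: the same query in CSS-selector syntax
print(sel.css('div.news-list li a::text').extract())
```

Both calls print the same list of headline strings.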
Install Scrapy on Windows:

$ pip install scrapy
Install Scrapy on Linux (Debian/Ubuntu):

$ apt-get install python-scrapy
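Either way, running `scrapy version` afterwards should print the installed version and confirm the setup.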
2. Creating a Scrapy Project
Before you start crawling, you must create a new Scrapy project. Change into the directory where you want the project to live and run:

$ scrapy startproject mySpider

Here mySpider is the project name; a mySpider folder is created, and you can inspect the directory structure with a command such as tree (a typical layout is sketched below).
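A typical layout produced by startproject looks roughly like this (middlewares.py is generated by recent Scrapy versions; details vary slightly between releases):

```
mySpider/
├── scrapy.cfg            # project configuration
└── mySpider/
    ├── __init__.py
    ├── items.py          # item definitions
    ├── middlewares.py    # spider / downloader middlewares
    ├── pipelines.py      # item pipelines
    ├── settings.py       # project settings
    └── spiders/
        └── __init__.py
```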
3. Defining a Custom Spider
Use Scrapy's basic Spider template to generate a starter spider. (You can also write the spider by hand without the Scrapy command.)

$ scrapy genspider gzrbSpider dayoo.com

scrapy genspider is one of the most frequently used Scrapy commands. With this, a minimal spider project is in place.
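The generated file under spiders/ is roughly the following skeleton (the exact class name and default start_urls depend on the Scrapy version); section 4 replaces it with the real parsing logic:

```python
import scrapy


class GzrbspiderSpider(scrapy.Spider):
    name = 'gzrbSpider'
    allowed_domains = ['dayoo.com']
    start_urls = ['http://dayoo.com/']

    def parse(self, response):
        pass
```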
File descriptions:
No. | File | Description |
---|---|---|
1 | scrapy.cfg | Configuration file for the whole Scrapy project |
2 | settings.py | The settings module referenced by scrapy.cfg in the directory above (decides who processes the crawled content) |
3 | `__init__.pyc` | Compiled bytecode of `__init__.py` |
4 | `__init__.py` | Turns its directory into an importable package; without `__init__.py` the folder cannot be imported as a module |
5 | items.py | Defines the fields the spider ultimately collects (decides what to scrape) |
6 | pipelines.py | Decides how the content scraped by the spider is processed afterwards |
7 | gzrbSpider.py | The custom spider class (decides how to crawl) |
Command descriptions:
No. | Operation | Command |
---|---|---|
1 | Open the Guangzhou Daily site in the Scrapy shell | scrapy shell https://www.dayoo.com |
2 | Inspect the node data interactively | response.xpath('.//div[@class="mt35"]//ul[@class="news-list"]').extract() |
3 | Run the spider | scrapy crawl gzrbSpider |
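Besides writing files through a pipeline (section 4), Scrapy's built-in feed exports can also dump the scraped items directly, e.g. `scrapy crawl gzrbSpider -o gzrb.json`.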
4. Scrapy Processing Logic
File: spiders\gzrbSpider.py

```python
import scrapy
from mySpider.items import MySpiderItem


class gzrbSpider(scrapy.Spider):
    name = "gzrbSpider"
    # Bare domain only (no trailing slash); allowed_domains expects domains, not URLs
    allowed_domains = ["dayoo.com"]
    start_urls = ('https://www.dayoo.com',)

    def parse(self, response):
        # Every <ul class="news-list"> inside the <div class="mt35"> block
        subSelector = response.xpath('.//div[@class="mt35"]//ul[@class="news-list"]')
        items = []
        for sub in subSelector:
            item = MySpiderItem()
            # Headline text of each <li><a> entry in the list
            item['newName'] = sub.xpath('./li/a/text()').extract()
            items.append(item)
        return items
```
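The parse method above accumulates items in a list and returns them at the end; the more idiomatic Scrapy pattern is to yield each item as soon as it is built, so items stream to the pipeline instead of piling up in memory. A sketch of the same logic rewritten that way (a drop-in replacement for the parse method above):

```python
    def parse(self, response):
        # Yield items one at a time instead of collecting them in a list
        for sub in response.xpath('.//div[@class="mt35"]//ul[@class="news-list"]'):
            item = MySpiderItem()
            item['newName'] = sub.xpath('./li/a/text()').extract()
            yield item
```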
File: items.py

```python
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class MySpiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    newName = scrapy.Field()    # holds the list of scraped headlines
```
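Item fields behave like dictionary keys, but only declared fields are accepted, which catches typos (such as a stray space in the key) at assignment time. A quick interactive check:

```python
from mySpider.items import MySpiderItem

item = MySpiderItem()
item['newName'] = ['Sample headline']   # OK: the field is declared above
print(item['newName'])

# item['newName '] = ['oops']           # raises KeyError: undeclared field
```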
File: settings.py

```python
# Scrapy settings for mySpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'mySpider'

SPIDER_MODULES = ['mySpider.spiders']
NEWSPIDER_MODULE = 'mySpider.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'mySpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'mySpider.middlewares.mySpiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'mySpider.middlewares.mySpiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'mySpider.pipelines.MySpiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```
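Most of this file is the stock template; the lines that matter here are ROBOTSTXT_OBEY and ITEM_PIPELINES (note that the pipeline entry must reference the MySpiderPipeline class defined in pipelines.py below). If one spider needs different values from the rest of the project, Scrapy also allows per-spider overrides through a custom_settings class attribute, for example:

```python
import scrapy


class gzrbSpider(scrapy.Spider):
    name = "gzrbSpider"
    # Per-spider overrides; these take precedence over settings.py
    custom_settings = {
        'DOWNLOAD_DELAY': 1,
        'ITEM_PIPELINES': {'mySpider.pipelines.MySpiderPipeline': 300},
    }
```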
File: pipelines.py

```python
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import time


# class mySpiderPipeline:
#     def process_item(self, item, spider):
#         return item


class MySpiderPipeline(object):
    def process_item(self, item, spider):
        # Name the output file after today's date, e.g. gzrb2024-01-01.txt
        now = time.strftime('%Y-%m-%d', time.localtime())
        fileName = 'gzrb' + now + '.txt'
        # Append every scraped headline to the file
        for it in item['newName']:
            with open(fileName, encoding='utf-8', mode='a') as fp:
                # fp.write(item['newName'][0] + '\n\n')
                fp.write(it + '\n\n')
        return item
```
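Opening the output file once per item works for a small crawl, but pipelines also provide open_spider and close_spider hooks, so the file handle can be opened once per run instead. A sketch of that variant, keeping the same date-stamped file name:

```python
import time


class MySpiderPipeline:
    def open_spider(self, spider):
        # Called once when the crawl starts: open the date-stamped output file
        now = time.strftime('%Y-%m-%d', time.localtime())
        self.fp = open('gzrb' + now + '.txt', mode='a', encoding='utf-8')

    def close_spider(self, spider):
        # Called once when the crawl finishes
        self.fp.close()

    def process_item(self, item, spider):
        for it in item['newName']:
            self.fp.write(it + '\n\n')
        return item
```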