
1. Installing Scrapy

Prerequisites

  • pip package management
  • A working Python installation
  • XPath
  • CSS selectors

Installing Scrapy on Windows

$>- pip install scrapy

Installing Scrapy on Linux

$>- apt-get install python-scrapy
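The distribution package often lags well behind the current release; on most Linux systems it is usually preferable to install via pip as well:

$>- pip install scrapy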

2. Creating a Scrapy Project

Before you start crawling, you must create a new Scrapy project. Change into the directory where the project should live and run:

$>- scrapy startproject mySpider

Here mySpider is the project name; the command creates a mySpider folder. You can inspect its directory structure with:

$>- tree mySpider
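For a freshly generated project, the output typically looks like this (recent Scrapy versions also generate a middlewares.py):

mySpider/
├── scrapy.cfg            # project configuration file
└── mySpider/             # the project's Python package
    ├── __init__.py
    ├── items.py          # item (field) definitions
    ├── middlewares.py    # spider and downloader middlewares
    ├── pipelines.py      # item-processing pipelines
    ├── settings.py       # project settings
    └── spiders/          # your spider classes live here
        └── __init__.py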

3. Creating a Custom Spider Class

Use Scrapy's basic Spider template to generate a starter spider. (You can also write a basic spider by hand without using the Scrapy command.)

$>- scrapy genspider gzrbSpider dayoo.com

scrapy genspider is one of the most commonly used Scrapy commands. At this point, a minimal crawler project is fully set up; the command generates a skeleton along the lines of the snippet below.
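A sketch of the generated file (the exact class name and default URL vary slightly between Scrapy versions, so treat this as illustrative):

import scrapy


class GzrbspiderSpider(scrapy.Spider):
    name = 'gzrbSpider'
    allowed_domains = ['dayoo.com']
    start_urls = ['http://dayoo.com/']

    def parse(self, response):
        # extraction logic goes here
        pass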

File descriptions:

1. scrapy.cfg: the configuration file for the Scrapy project as a whole
2. settings.py: the settings file that scrapy.cfg (one directory up) points to (decides who processes the crawled content)
3. __init__.pyc: the compiled bytecode of __init__.py
4. __init__.py: turns its directory into a Python package; without it, the folder cannot be imported as a module
5. items.py: defines which fields the spider ultimately extracts (decides what to crawl)
6. pipelines.py: decides how the content scraped from pages is processed afterwards (decides what to do with what was crawled)
7. gzrbSpider.py: the custom spider class (decides how to crawl)

Command reference:

1. Fetch the Guangzhou Daily homepage in the interactive shell: scrapy shell https://www.dayoo.com
2. Inspect the node data: response.xpath('.//div[@class="mt35"]//ul[@class="news-list"]').extract()
3. Run the spider: scrapy crawl gzrbSpider
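For example, a quick shell session for testing selectors before writing the spider (the XPath assumes dayoo.com's markup at the time the article was written):

$>- scrapy shell https://www.dayoo.com
>>> response.xpath('.//div[@class="mt35"]//ul[@class="news-list"]//li/a/text()').extract()  # returns a list of headline strings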

4. Scrapy Processing Logic

File: spiders/gzrbSpider.py

import scrapy
from mySpider.items import MySpiderItem


class gzrbSpider(scrapy.Spider):
    name = "gzrbSpider"
    # entries must be bare domain names, with no trailing slash
    allowed_domains = ["dayoo.com"]
    start_urls = ('https://www.dayoo.com',)

    def parse(self, response):
        # select every news list block on the page
        subSelector = response.xpath('.//div[@class="mt35"]//ul[@class="news-list"]')
        items = []
        for sub in subSelector:
            item = MySpiderItem()
            # collect the headline text of each <li><a> entry
            item['newName'] = sub.xpath('./li/a/text()').extract()
            items.append(item)
        return items
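In current Scrapy it is more idiomatic to yield items one at a time instead of accumulating a list; a minimal sketch of parse rewritten that way (same selectors, one item per headline):

def parse(self, response):
    for li in response.xpath('.//div[@class="mt35"]//ul[@class="news-list"]/li'):
        item = MySpiderItem()
        # .get() returns the first matching text node, or None
        item['newName'] = li.xpath('./a/text()').get()
        yield item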

File: items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class MySpiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    newName = scrapy.Field()   # headline text collected by the spider

File: settings.py

# Scrapy settings for mySpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'mySpider'

SPIDER_MODULES = ['mySpider.spiders']
NEWSPIDER_MODULE = 'mySpider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'mySpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'mySpider.middlewares.mySpiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'mySpider.middlewares.mySpiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# The class path must match the pipeline class defined in pipelines.py
ITEM_PIPELINES = {
    'mySpider.pipelines.MySpiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

File: pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import time


class MySpiderPipeline(object):
    def process_item(self, item, spider):
        # one output file per day, e.g. gzrb2025-01-01.txt
        now = time.strftime('%Y-%m-%d', time.localtime())
        fileName = 'gzrb' + now + '.txt'
        # 'newName' holds the list of headline strings extracted by the spider
        for it in item['newName']:
            with open(fileName, encoding='utf-8', mode='a') as fp:
                fp.write(it + '\n\n')
        return item
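As an aside, for a simple case like this Scrapy's built-in feed exports can replace the custom pipeline entirely, e.g.:

$>- scrapy crawl gzrbSpider -o headlines.json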

Result of running the code in this article: a dated text file (e.g. gzrb2025-01-01.txt) containing the scraped headlines.

5. Further Topics: XPath and CSS

XPath:
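A few XPath patterns relevant to this project, run against the response object from scrapy shell (the paths assume dayoo.com's markup):

response.xpath('//ul[@class="news-list"]/li/a/text()').getall()   # every headline text
response.xpath('//ul[@class="news-list"]/li/a/@href').get()       # first headline URL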

CSS:
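The same selections written as CSS selectors, using Scrapy's ::text and ::attr() extensions:

response.css('ul.news-list li a::text').getall()      # every headline text
response.css('ul.news-list li a::attr(href)').get()   # first headline URL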
