Preface
Scrapy is an excellent crawler framework built on the Twisted asynchronous programming framework, and its use of yield is elegant. Through its scheduler and downloader you can extend Scrapy programmatically; the plugin ecosystem is rich, and integration with Selenium or Playwright is straightforward.
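The "yield" style can be sketched in plain Python: a parse callback is just a generator that yields scraped items and follow-up requests from the same function body. A stand-alone sketch, where the dict-based `response` is only a stand-in for a real `scrapy.http.Response`:

```python
# Plain-Python sketch of Scrapy's generator-based parse callbacks.
# `response` here is a plain dict standing in for scrapy.http.Response.
def parse(response):
    # Yield one item per extracted title...
    for title in response["titles"]:
        yield {"title": title}
    # ...and, from the same generator, a follow-up "request" for the next page.
    if response.get("next_page"):
        yield {"follow": response["next_page"]}
```

In a real spider the dicts would be Items and `Request` objects; the scheduler consumes whatever the generator yields.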
Of course, Scrapy on its own cannot see the ajax requests a page makes, but combined with mitmproxy it can do almost anything: Scrapy + Playwright simulate user clicks, while mitmproxy captures the network traffic in the background. Log in once, run all day.
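The mitmproxy side can be a small addon script using mitmproxy's standard `response` hook. A minimal sketch; the `ArticleCapture` class name and the `/api/articles` path are hypothetical, invented for illustration:

```python
import json

class ArticleCapture:
    """mitmproxy addon sketch: collect JSON bodies of matching API responses."""

    def __init__(self, target_path="/api/articles"):  # hypothetical endpoint
        self.target_path = target_path
        self.captured = []

    def response(self, flow):
        # mitmproxy invokes this hook once per completed HTTP response
        if self.target_path in flow.request.path:
            self.captured.append(json.loads(flow.response.get_text()))

addons = [ArticleCapture()]  # mitmproxy picks this up when run as `mitmdump -s capture.py`
```

The browser driven by Playwright is pointed at the proxy, so every ajax response the page triggers passes through this hook.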
In the end I tied these tools together with asyncio and achieved largely unattended, stable operation: article after article flows into my Elasticsearch cluster and, after passing through the knowledge-factory pipeline, becomes a knowledge product.
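The Elasticsearch hand-off can live in an ordinary Scrapy item pipeline. A minimal sketch, assuming an injected client exposing `index(index=..., document=...)` (the Elasticsearch 8.x Python client's signature); the class and index names are illustrative, not from the original article:

```python
class ElasticsearchPipeline:
    """Scrapy item pipeline sketch: index each scraped article into ES."""

    def __init__(self, client, index="articles"):
        # `client` would be e.g. elasticsearch.Elasticsearch("http://localhost:9200")
        self.client = client
        self.index = index

    def process_item(self, item, spider=None):
        # Scrapy calls process_item once for every item the spider yields
        self.client.index(index=self.index, document=dict(item))
        return item  # pass the item on to any later pipelines
```

Registering it under ITEM_PIPELINES in settings.py wires it into the crawl.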
"Crawlers plus data, algorithms plus intelligence" — that is one technologist's ideal.
Configuration and running
Install:
pip install scrapy
With a scrapy.cfg and a settings.py in the current directory, Scrapy is ready to run.
Run from the command line:
scrapy crawl ArticleSpider
There are three ways to run it from inside a program. The first uses cmdline.execute:
from scrapy.cmdline import execute

execute('scrapy crawl ArticleSpider'.split())
The second uses CrawlerRunner:
# Using CrawlerRunner: install the asyncio reactor before importing `reactor`
from twisted.internet import asyncioreactor
asyncioreactor.install()
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner

runner = CrawlerRunner(settings)
d = runner.crawl(ArticleSpider)
d.addBoth(lambda _: reactor.stop())  # stop the reactor when the crawl finishes
reactor.run()
The third uses CrawlerProcess:
# Using CrawlerProcess, which manages the reactor for you
from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings)
process.crawl(ArticleSpider)
process.start()
Integration with Playwright
Install:
pip install scrapy-playwright
playwright install                    # install all supported browsers, or pick specific ones:
playwright install firefox chromium
settings.py configuration:
BOT_NAME = 'ispider'
SPIDER_MODULES = ['ispider.spider']
TWISTED_REACTOR = 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'
DOWNLOAD_HANDLERS = {
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
CONCURRENT_REQUESTS = 32
PLAYWRIGHT_MAX_PAGES_PER_CONTEXT = 4
CLOSESPIDER_ITEMCOUNT = 100
# Connect over the Chrome DevTools Protocol to a browser that is already running
# (e.g. one launched with --remote-debugging-port=9900) instead of launching a new one.
PLAYWRIGHT_CDP_URL = "http://localhost:9900"
Spider definition:
import asyncio
import logging
from typing import Generator, Optional

from scrapy import Request, Spider
from scrapy.http import Response

logger = logging.getLogger(__name__)

class ArticleSpider(Spider):
    name = "ArticleSpider"
    custom_settings = {
        # These can be set per-spider here instead of globally in settings.py:
        # "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
        # "DOWNLOAD_HANDLERS": {
        #     "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        #     "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        # },
        # "CONCURRENT_REQUESTS": 32,
        # "PLAYWRIGHT_MAX_PAGES_PER_CONTEXT": 4,
        # "CLOSESPIDER_ITEMCOUNT": 100,
    }
    start_urls = ["https://blog.csdn.net/nav/lang/javascript"]

    def __init__(self, name=None, **kwargs):
        super().__init__(name, **kwargs)
        logger.debug('ArticleSpider initialized.')

    def start_requests(self):
        for url in self.start_urls:
            yield Request(
                url,
                meta={
                    "playwright": True,
                    "playwright_context": "first",
                    "playwright_include_page": True,
                    "playwright_page_goto_kwargs": {
                        "wait_until": "domcontentloaded",
                    },
                },
            )

    async def parse(self, response: Response, current_page: Optional[int] = None) -> Generator:
        content = response.text
        page = response.meta["playwright_page"]
        context = page.context
        title = await page.title()
        while True:
            # Scroll down to trigger the page's infinite-scroll loading
            await page.mouse.wheel(delta_x=0, delta_y=200)
            await asyncio.sleep(3)  # non-blocking wait; time.sleep would stall the event loop
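The `while True` scroll loop above never terminates on its own; a natural refinement is to stop once the page height stops growing. A stand-alone sketch, where `get_height` and `scroll` are placeholder callables for, e.g., `page.evaluate("document.body.scrollHeight")` and the wheel call:

```python
import asyncio

async def scroll_until_stable(get_height, scroll, max_rounds=50, settle=2):
    """Scroll until the reported page height is unchanged `settle` rounds in a row."""
    last = await get_height()
    stable = 0
    for _ in range(max_rounds):
        await scroll()
        height = await get_height()
        if height == last:
            stable += 1
            if stable >= settle:  # nothing new loaded: assume we hit the bottom
                break
        else:
            stable, last = 0, height
    return last
```

The `max_rounds` cap guards against pages that keep loading forever, which matters for unattended runs.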
References
- The official scrapy-playwright plugin
- GerapyPlaywright, a plugin written by Cui Qingcai (靜覓)