
In the previous two sections we crawled the 360 Images site, but the images themselves still need to be downloaded. How should they be downloaded and stored?

Three approaches are covered below: 1. download the images and store the records in a MongoDB database; 2. download the images and store the records in a MySQL database; 3. download the images to a local folder.

Without further ado, here is the code:

1. Define the storage fields through the Item

# items.py
import scrapy


class Bole_mode(scrapy.Item):
    collection = "images"     # collection: name of the MongoDB collection
    table = "images"          # table: name of the MySQL table
    id = scrapy.Field()       # image id
    url = scrapy.Field()      # image URL
    title = scrapy.Field()    # title
    thumb = scrapy.Field()    # thumbnail URL

2. Configure the database information in the settings file

# -*- coding: utf-8 -*-

# Scrapy settings for bole project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'bole'

SPIDER_MODULES = ['bole.spiders']
NEWSPIDER_MODULE = 'bole.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'bole (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'bole.middlewares.BoleSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': None,
#    'bole.middlewares.ProxyMiddleware': 125,
#    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': None,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "bole.pipelines.ImagePipeline": 300,
    "bole.pipelines.MongoPipeline": 301,
    "bole.pipelines.MysqlPipeline": 302,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# Maximum number of pages to crawl
MAX_PAGE = 50

# MongoDB configuration
MONGODB_URL = "localhost"
MONGODB_DB = "Images360"

# MySQL configuration
MYSQL_HOST = "localhost"
MYSQL_DATABASE = "images360"
MYSQL_PORT = 3306
MYSQL_USER = "root"
MYSQL_PASSWORD = ""

# Local path for downloaded images
IMAGES_STORE = r"D:\spider\bole\image"
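
Note that the MysqlPipeline in step 4 only inserts rows; it assumes the images360 database and its images table already exist. Below is a minimal one-off setup sketch. The script name and the column types (VARCHAR columns matching the four Item fields) are my own assumptions, not something specified by the original project:

# create_table.py -- hypothetical one-off setup script, not part of the original project
import pymysql

# Connect using the same parameters as in settings.py
conn = pymysql.connect(host="localhost", user="root", password="", port=3306, charset="utf8")
cursor = conn.cursor()
# Create the database and the table that MysqlPipeline writes into
cursor.execute("CREATE DATABASE IF NOT EXISTS images360 DEFAULT CHARACTER SET utf8")
cursor.execute(
    "CREATE TABLE IF NOT EXISTS images360.images ("
    "id VARCHAR(255) NOT NULL PRIMARY KEY, "
    "url VARCHAR(255), "
    "title VARCHAR(255), "
    "thumb VARCHAR(255))"
)
conn.close()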

3. The middlewares are left unchanged here (this is the default template generated by Scrapy)

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals


class BoleSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class BoleDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

4. Store the crawled data through pipelines: MongoDB storage, MySQL storage, and local folder storage

# -*- coding: utf-8 -*-

# ========================== MongoDB ===========================
import pymongo


class MongoPipeline(object):
    def __init__(self, mongodb_url, mongodb_DB):
        self.mongodb_url = mongodb_url
        self.mongodb_DB = mongodb_DB

    # Read MONGODB_URL and MONGODB_DB from the settings file
    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongodb_url=crawler.settings.get("MONGODB_URL"),
            mongodb_DB=crawler.settings.get("MONGODB_DB")
        )

    # Connect to MongoDB when the spider opens
    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongodb_url)
        self.db = self.client[self.mongodb_DB]

    def process_item(self, item, spider):
        table_name = item.collection
        self.db[table_name].insert_one(dict(item))
        return item

    # Disconnect from MongoDB when the spider closes
    def close_spider(self, spider):
        self.client.close()


# ========================== MySQL ===========================
import pymysql


class MysqlPipeline():
    def __init__(self, host, database, port, user, password):
        self.host = host
        self.database = database
        self.port = port
        self.user = user
        self.password = password

    # Read the MySQL parameters from the settings file
    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            host=crawler.settings.get("MYSQL_HOST"),
            database=crawler.settings.get("MYSQL_DATABASE"),
            port=crawler.settings.get("MYSQL_PORT"),
            user=crawler.settings.get("MYSQL_USER"),
            password=crawler.settings.get("MYSQL_PASSWORD")
        )

    # Connect to MySQL when the spider opens
    def open_spider(self, spider):
        self.db = pymysql.connect(host=self.host, database=self.database, user=self.user,
                                  password=self.password, port=self.port, charset="utf8")
        self.cursor = self.db.cursor()

    def process_item(self, item, spider):
        data = dict(item)
        keys = ",".join(data.keys())             # column names
        values = ",".join(["%s"] * len(data))    # value placeholders
        sql = "insert into %s(%s) values(%s)" % (item.table, keys, values)
        self.cursor.execute(sql, tuple(data.values()))
        self.db.commit()
        return item

    # Close the MySQL connection when the spider closes
    def close_spider(self, spider):
        self.db.close()


# ========================== Local files ===========================
import scrapy
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline


class ImagePipeline(ImagesPipeline):
    # The url in the item is a single string rather than a list, so the
    # following methods are overridden.

    def file_path(self, request, response=None, info=None):
        url = request.url
        file_name = url.split("/")[-1]   # use the last segment of the URL as the file name
        return file_name

    # results holds the download results for this item: a list of (ok, info)
    # tuples covering both successful and failed downloads
    def item_completed(self, results, item, info):
        # collect the storage paths of the successfully downloaded images
        image_paths = [x["path"] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Image download failed!")
        return item

    # take the url field from the item and enqueue it for downloading
    def get_media_requests(self, item, info):
        yield scrapy.Request(item["url"])
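
To make the dynamic INSERT built in MysqlPipeline.process_item() concrete, the short standalone sketch below shows how the column list and the %s placeholders are assembled; the field values are invented placeholders, not real crawl data:

# Illustration only: how process_item() builds its parameterized INSERT statement
data = {"id": "example-id", "url": "http://example.com/a.jpg",
        "title": "example title", "thumb": "http://example.com/a_thumb.jpg"}

keys = ",".join(data.keys())             # "id,url,title,thumb"
values = ",".join(["%s"] * len(data))    # "%s,%s,%s,%s"
sql = "insert into %s(%s) values(%s)" % ("images", keys, values)

print(sql)   # insert into images(id,url,title,thumb) values(%s,%s,%s,%s)
# cursor.execute(sql, tuple(data.values())) then binds the actual values safely.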

5. Finally, the spider that does the actual crawling

import scrapy
import json

from bole.items import Bole_mode


class BoleSpider(scrapy.Spider):
    name = 'boleSpider'

    def start_requests(self):
        # sn is the record offset; the interface returns 30 records per page
        url = "https://image.so.com/zj?ch=photography&sn={}&listtype=new&temp=1"
        page = self.settings.get("MAX_PAGE")
        for i in range(int(page) + 1):
            yield scrapy.Request(url=url.format(i * 30))

    def parse(self, response):
        # the response body is JSON; the image records sit under the "list" key
        photo_list = json.loads(response.text)
        for image in photo_list.get("list"):
            item = Bole_mode()
            item["id"] = image["id"]
            item["url"] = image["qhimg_url"]
            item["title"] = image["group_title"]
            item["thumb"] = image["qhimg_thumb_url"]
            yield item
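
For reference, parse() only relies on a handful of keys in the JSON returned by the interface. Below is a trimmed-down sketch of the structure it expects; the values are invented placeholders and the real response carries many more fields:

# Hypothetical, minimal example of the JSON structure parse() consumes
sample_response = {
    "list": [
        {
            "id": "example-id-1",                             # -> item["id"]
            "qhimg_url": "http://example.com/1.jpg",          # -> item["url"]
            "group_title": "example title",                   # -> item["title"]
            "qhimg_thumb_url": "http://example.com/1_s.jpg",  # -> item["thumb"]
        },
    ],
}

With everything in place, the crawl is started from the project root with scrapy crawl boleSpider.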

6. Lastly, a quick look at the crawl results (only MySQL and local storage are shown; MongoDB was not opened)

(1) MySQL storage

[Screenshot: records stored in the MySQL images table]

(2) Local storage

[Screenshot: image files downloaded to the local folder]
