Chapter 5-6: In-memory Computing - Spark

Data Processing System Architecture

A data processing system brings together several parts: the computing algorithm, the computing model, the computing platform and engine, the data storing system, and the data application system. Computing platforms provide various development kits and operating environments. Computing models address different types of data, for example:

1. Batch processing model for massive data, e.g. MapReduce
2. Stream computing model for dynamic data streams
3. Massively parallel processing (MPP) model for structured data
4. In-memory computing model based on large-scale physical memory
5. Data flow graph model

Computing engines include Hadoop, Spark, Storm, etc.

Spark

Spark was initially started by Matei Zaharia at UC Berkeley's AMP Lab in 2009 and open sourced in 2010. In 2013 it was donated to the Apache Software Foundation; it is now a Top-Level Apache Project and one of the most active open source big data projects.

Spark is a parallel processing framework based on the in-memory computing model. It can be built on the Hadoop platform and use the HDFS file system to store data, but a Resilient Distributed Dataset (RDD) architecture is built on top of the file system to support efficient distributed memory computing.

What is Spark

RDD (Resilient Distributed Dataset)
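To make the RDD model concrete, here is a minimal word-count sketch in Scala (a sketch only: it assumes the Spark libraries are on the classpath, and the HDFS path and object name are hypothetical). Transformations such as flatMap, map, and reduceByKey build new RDDs on top of the input file, and collect() triggers the actual distributed computation:

    import org.apache.spark.sql.SparkSession

    object WordCount {
      def main(args: Array[String]): Unit = {
        // SparkSession is the entry point; sparkContext exposes the RDD API.
        val spark = SparkSession.builder().appName("WordCount").getOrCreate()
        val sc = spark.sparkContext

        // Hypothetical HDFS path; a local file path also works.
        val lines = sc.textFile("hdfs:///data/input.txt")

        // Transformations: each call returns a new RDD and only records lineage.
        val counts = lines
          .flatMap(line => line.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)

        // collect() is an action: it runs the job and returns results to the driver.
        counts.collect().foreach { case (word, n) => println(s"$word: $n") }

        spark.stop()
      }
    }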

Spark Driver and Executor

A Spark application consists of a Driver (running on the master node; there is also a mode in which it runs on a worker node) and Executors (running on the worker nodes):

- The Driver is responsible for converting the computing tasks of the application into a directed acyclic graph (DAG).
- The Executors are responsible for carrying out the computation and storing data on the worker nodes.
- On each worker, the Executor generates a task thread for each data partition distributed to it, so the partitions are processed in parallel.
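The relationship between partitions, tasks, and the Driver's DAG can be seen in the following sketch (hypothetical object name; run in local mode, so the task threads all live in one JVM). The data is split into four partitions, and mapPartitionsWithIndex runs exactly once per partition, i.e. once per task:

    import org.apache.spark.sql.SparkSession

    object PartitionDemo {
      def main(args: Array[String]): Unit = {
        // local[4] provides 4 task threads, enough to see the parallelism.
        val spark = SparkSession.builder()
          .appName("PartitionDemo").master("local[4]").getOrCreate()
        val sc = spark.sparkContext

        // Split the data into 4 partitions; each partition becomes one task.
        val rdd = sc.parallelize(1 to 100, numSlices = 4)
        println(s"number of partitions: ${rdd.getNumPartitions}")

        // mapPartitionsWithIndex is invoked once per partition (one task each).
        val perPartition = rdd.mapPartitionsWithIndex { (idx, it) =>
          Iterator((idx, it.sum))
        }

        // The action triggers the DAG built by the Driver; tasks run in parallel.
        perPartition.collect().foreach { case (idx, sum) =>
          println(s"partition $idx -> sum of elements $sum")
        }

        spark.stop()
      }
    }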

Features of Spark

- Coarse-grained operations: transformations apply to the whole dataset, not to individual elements.
- Persistence: the result of an RDD evaluation can be saved, storing the intermediate result so that it can be used further.
- Partitioning: RDDs are huge collections of data items that cannot fit into a single node and must be partitioned across various nodes.
- Immutability: created data can be retrieved at any time, but its value cannot be changed.
- Fault tolerance: RDDs track data lineage information to reconstruct lost data automatically.
- Lazy evaluation: Spark does not compute a result immediately; execution does not start until an action is triggered. Calling a transformation on an RDD does not execute it right away.
- In-memory computation: computed results are stored in distributed memory (RAM) instead of stable storage (disk).
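Several of these features show up in one short sketch (hypothetical object name; a local Spark session is assumed): map and filter only record lineage, persist() keeps the computed partitions in executor memory, and nothing actually runs until the count() action is called:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.storage.StorageLevel

    object LazyEvalDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("LazyEvalDemo").master("local[*]").getOrCreate()
        val sc = spark.sparkContext

        val numbers = sc.parallelize(1L to 1000000L)

        // Transformations are lazy: these lines only record the RDD lineage.
        val squares = numbers.map(n => n * n)
        val evenSquares = squares.filter(_ % 2 == 0)

        // Mark the RDD to be kept in executor memory once it has been computed.
        evenSquares.persist(StorageLevel.MEMORY_ONLY)

        // count() is an action: only now are tasks shipped to the executors.
        println(s"first count:  ${evenSquares.count()}")  // computes and caches
        println(s"second count: ${evenSquares.count()}")  // read from the cache

        // The recorded lineage (used to rebuild lost partitions) can be printed:
        println(evenSquares.toDebugString)

        spark.stop()
      }
    }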

Spark Components

Spark Advantages

- Fast processing
- Flexibility
- In-memory computing
- Real-time processing
- Better analytics

Spark Ecosystem

Questions?
