Hybrid Parallel Programming on GPU Clusters

Abstract
Nowadays, NVIDIA's CUDA is a general-purpose, scalable parallel programming model for writing highly parallel applications. It provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. This model has proven quite successful at programming multithreaded many-core GPUs and scales transparently to hundreds of cores: scientists throughout industry and academia are already using CUDA to achieve dramatic speedups on production and research codes. In this paper, we propose a hybrid parallel programming approach using CUDA and MPI, which partitions loop iterations according to the number of C1060 GPU nodes in a GPU cluster consisting of one C1060 and one S1070. Loop iterations assigned to one MPI process are processed in parallel by CUDA, run by the processor cores in the same computational node.

Keywords: CUDA, GPU, MPI, OpenMP, hybrid, parallel programming

I. INTRODUCTION
Nowadays, NVIDIA's CUDA [1], [16] is a general-purpose, scalable parallel programming model for writing highly parallel applications. It provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. This model has proven quite successful at programming multithreaded many-core GPUs and scales transparently to hundreds of cores: scientists throughout industry and academia are already using CUDA [1], [16] to achieve dramatic speedups on production and research codes.
NVIDIA builds its CUDA chips with hundreds of cores, and here we will try to use the computing devices NVIDIA provides for parallel computing. This paper proposes a solution that not only simplifies the use of hardware acceleration in conventional general-purpose applications, but also keeps the application code portable. In this paper, we propose a parallel programming approach using hybrid CUDA, OpenMP, and MPI [3] programming, which partitions loop iterations according to the performance weighting of the multi-core [4] nodes in a cluster. Because iterations assigned to one MPI process are processed in parallel by OpenMP threads run by the processor cores in the same computational node, the number of loop iterations allocated to one computational node at each scheduling step depends on the number of processor cores in that node.
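The paper does not show the partitioning code at this point; the following is a minimal sketch of the idea, assuming a contiguous block of iterations per MPI process sized by the node's core count as reported by omp_get_num_procs(), with a stand-in loop body:

/*
 * Minimal sketch of the scheme described above (not the authors' code):
 * each MPI process takes a contiguous block of loop iterations sized in
 * proportion to its node's core count, then runs that block with OpenMP
 * threads on the node's cores.
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000   /* total loop iterations (illustrative) */

int main(int argc, char *argv[])
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Gather every node's core count so all processes know the weights. */
    int my_cores = omp_get_num_procs();
    int *cores = malloc(nprocs * sizeof(int));
    MPI_Allgather(&my_cores, 1, MPI_INT, cores, 1, MPI_INT, MPI_COMM_WORLD);

    long total = 0;
    for (int i = 0; i < nprocs; i++) total += cores[i];

    /* This process's block: proportional to its share of all cores. */
    long start = 0;
    for (int i = 0; i < rank; i++) start += (long)N * cores[i] / total;
    long end = (rank == nprocs - 1) ? N
                                    : start + (long)N * cores[rank] / total;

    double local = 0.0;
    #pragma omp parallel for reduction(+:local)   /* one thread per core */
    for (long i = start; i < end; i++)
        local += (double)i * i;                   /* stand-in loop body */

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %e over %d processes\n", global, nprocs);

    free(cores);
    MPI_Finalize();
    return 0;
}

The same block structure carries over to the CUDA variant in the abstract: the OpenMP loop is simply replaced by a kernel launch over the process's iteration range.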
In this paper, we propose a general approach that uses performance functions to estimate performance weights for each node. To verify the proposed approach, a heterogeneous cluster and a homogeneous cluster were built. In our implementation, the master node also participates in computation, whereas in previous schemes only slave nodes do computation work. Empirical results show that in both heterogeneous and homogeneous cluster environments, the proposed approach improved performance over all previous schemes.

The rest of this paper is organized as follows. In Section 2, we introduce several typical and well-known self-scheduling schemes and a famous benchmark used to analyze computer system performance. In Section 3, we define our model and describe our approach. Our system configuration is then specified in Section 4, and experimental results for three types of application program are presented. Concluding remarks and future work are given in Section 5.
II. BACKGROUND REVIEW

A. History of GPU and CUDA
In the past, we had to use more than one computer for multi-CPU parallel computing. As the history of display chips shows, early chips did not need much computing power; then, as games and graphics created a growing need for 3D, 3D accelerator cards appeared, graphics processing gradually moved onto a separate display chip, and that chip eventually grew CPU-like in complexity, that is, the GPU. We know that GPU computing can be used to get the answers we want, but why do we choose the GPU? A comparison of current CPUs and GPUs shows why. First, a CPU currently has at most eight cores, whereas a GPU has grown to 260 cores; with such a core count, and despite each core's relatively low clock frequency, we believe that on a large number of parallel computations the GPU will not be weaker than a single-issue CPU. Next, compare CPU access to main memory with GPU access to its on-board memory: the GPU accesses its memory roughly 10 times faster, a gap of about 90 GB/s. This is quite an alarming gap, and it means that computations that must access large amounts of data can be improved considerably by the GPU. A CPU uses advanced flow control, such as branch prediction or delayed branches, and a large cache to reduce memory access latency; a GPU's cache is relatively small and its flow control simple, so the GPU's method is to use a large number of computing threads to cover up the memory latency problem. That is, suppose one GPU memory access takes 5 seconds; if 100 threads access memory simultaneously, the total time is still 5 seconds. Suppose instead that one CPU memory access takes 0.1 seconds; if 100 threads access memory one after another, the total time is 10 seconds. Therefore, GPU parallel processing can be used to hide memory latency even though a single access is slower than on the CPU.
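Writing out these illustrative figures as aggregate access times makes the contrast explicit (toy numbers for intuition, not measured latencies):

    t_GPU = 5 s for 100 concurrent accesses (latencies overlapped)
    t_CPU = 100 x 0.1 s = 10 s for 100 serialized accesses

So although a single GPU access is 50 times slower in this example, the GPU finishes all 100 accesses in half the CPU's time.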
A GPU is designed such that more transistors are devoted to data processing rather than data caching and flow control, as schematically illustrated by Figure 1. Therefore, to take advantage of the GPU's arithmetic logic, we try to use the many cores NVIDIA makes available to help us with heavy computation, programming them through the parallel programming API that NVIDIA Corporation provides. Must we use the form provided by NVIDIA Corporation to run GPU computing? Not really. We can use NVIDIA's CUDA, ATI's CTM, or OpenCL (Open Computing Language), initiated by Apple. CUDA was developed earliest and has the most users at this stage, but NVIDIA's CUDA supports only NVIDIA's own graphics cards; indeed, at this stage almost all graphics cards used for GPU computing are from NVIDIA. ATI also developed its own language, CTM, and Apple proposed OpenCL (Open Computing Language), which is now supported by both NVIDIA and ATI, while ATI has since given up CTM in favor of OpenCL. Because of their graphics heritage, GPUs previously supported only single-precision floating-point operations, and in science precision is a very important indicator; therefore, the computing graphics cards introduced this year have come to support double-precision floating-point operations.
B. CUDA Programming
CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing [2] architecture developed by NVIDIA. CUDA is the computing engine in NVIDIA graphics processing units (GPUs) that is accessible to software developers through industry-standard programming languages. The CUDA software stack is composed of several layers, as illustrated in Figure 2: a hardware driver, an application programming interface (API) and its runtime, and two higher-level mathematical libraries of common usage, CUFFT [17] and CUBLAS [18]. The hardware has been designed to support lightweight driver and runtime layers, resulting in high performance. The CUDA architecture supports a range of computational interfaces, including OpenGL [9] and DirectCompute. CUDA's parallel programming model is designed to overcome the challenge of scaling to many cores while maintaining a low learning curve for programmers familiar with standard programming languages such as C. At its core are three key abstractions: a hierarchy of thread groups, shared memories, and barrier synchronization, which are simply exposed to the programmer as a minimal set of language extensions. These abstractions provide fine-grained data parallelism and thread parallelism, nested within coarse-grained data parallelism and task parallelism. They guide the programmer to partition the problem into coarse sub-problems that can be solved independently in parallel, and then into finer pieces that can be solved cooperatively in parallel. Such a decomposition preserves language expressivity by allowing threads to cooperate when solving each sub-problem, and at the same time enables transparent scalability, since each sub-problem can be scheduled to be solved on any of the available processor cores: a compiled CUDA program can therefore execute on any number of processor cores, and only the runtime system needs to know the physical processor count.
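The paper gives no code for these abstractions; as a minimal sketch, the CUDA C kernel below (names illustrative) exercises all three: the grid/block thread hierarchy, per-block __shared__ memory, and the __syncthreads() barrier, in a block-level tree reduction.

/*
 * Illustrative kernel (not from the paper): each block cooperatively
 * sums one TILE-element slice of the input via shared memory and
 * barriers, writing one partial sum per block. Launch with TILE
 * threads per block.
 */
#include <cuda_runtime.h>

#define TILE 256

__global__ void tileSum(const float *in, float *blockSums, int n)
{
    __shared__ float tile[TILE];            /* per-block shared memory */
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* thread hierarchy */

    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                        /* barrier: tile fully loaded */

    /* Tree reduction within the block; barrier between each step. */
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        blockSums[blockIdx.x] = tile[0];    /* one partial sum per block */
}

Because each block cooperates only through its own shared memory and barrier, blocks can be scheduled onto any free multiprocessor, which is exactly the transparent scalability described above.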
C. CUDA Processing Flow
The CUDA processing flow is described in Figure 3 [16]. The first step is to copy data from main memory to GPU memory; second, the CPU instructs the GPU to process; third, the GPU executes in parallel on each core; finally, the result is copied from GPU memory back to main memory.
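A minimal host-side sketch of those four steps (illustrative, not the paper's code; it reuses the hypothetical tileSum kernel from the previous sketch and omits error checking for brevity):

#include <cuda_runtime.h>
#include <stdlib.h>

__global__ void tileSum(const float *in, float *blockSums, int n);

int main(void)
{
    const int n = 1 << 20, blocks = (n + 255) / 256;
    size_t inBytes = n * sizeof(float), outBytes = blocks * sizeof(float);

    float *h_in  = (float *)malloc(inBytes);
    float *h_out = (float *)malloc(outBytes);
    for (int i = 0; i < n; i++) h_in[i] = 1.0f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, inBytes);
    cudaMalloc(&d_out, outBytes);

    /* Step 1: copy data from main memory to GPU memory. */
    cudaMemcpy(d_in, h_in, inBytes, cudaMemcpyHostToDevice);

    /* Steps 2-3: CPU instructs the GPU; cores execute in parallel. */
    tileSum<<<blocks, 256>>>(d_in, d_out, n);

    /* Step 4: copy the result from GPU memory back to main memory. */
    cudaMemcpy(h_out, d_out, outBytes, cudaMemcpyDeviceToHost);

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}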
III. SYSTEM HARDWARE

A. Tesla C1060 GPU Computing Processor
The NVIDIA Tesla C1060 transforms a workstation into a high-performance computer that outperforms a small cluster. This gives technical professionals a dedicated computing resource at their desk side that is much faster and more energy-efficient than a shared cluster in the data center. The NVIDIA Tesla C1060 computing processor board, which consists of 240 cores, is a PCI Express 2.0 form-factor computing add-in card based on the NVIDIA Tesla T10 graphics processing unit (GPU). This board is targeted as a high-performance computing (HPC) solution for PCI Express systems. The Tesla C1060 [15] is capable of 933 GFLOPs [13] of processing performance and comes standard with 4 GB of GDDR3 memory at 102 GB/s bandwidth. A computer system with an available PCI Express x16 slot is required for the Tesla C1060. For the best system bandwidth between the host processor and the Tesla C1060, it is recommended (but not required) that the Tesla C1060 be installed in a PCI Express x16 Gen2 slot. The Tesla C1060 is based on the massively parallel, many-core Tesla processor, which is coupled with the standard CUDA C programming [14] environment to simplify many-core programming.
B. Tesla S1070 GPU Computing System
The NVIDIA Tesla S1070 [12] computing system speeds the transition to energy-efficient parallel computing [2]. With 960 processor cores and a standard programming environment that simplifies application development, Tesla solves the world's most important computing challenges more quickly and accurately. The NVIDIA Tesla S1070 computing system is a rack-mount system of Tesla T10 computing processors. This system connects to one or two host systems via one or two PCI Express cables. A Host Interface Card (HIC) [5] is used to connect each PCI Express cable to a host. The host interface cards are compatible with both PCI Express 1x and PCI Express 2x systems. The Tesla S1070 GPU computing system is based on the T10 GPU from NVIDIA. It can be connected to a single host system via two PCI Express connections to that host, or connected to two separate host systems via one connection to each host. Each PCI Express cable connects to two of the GPUs in the Tesla S1070. If only one PCI Express cable is connected to the Tesla S1070, only two of the GPUs will be used.
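The paper does not show host code for this configuration. Assuming each T10 reachable over the PCI Express connection appears to the host as one CUDA device, the sketch below illustrates how a process can enumerate the visible devices and bind to one (for instance, by MPI rank in the cluster setting):

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);   /* e.g. two per S1070 cable attached */

    for (int d = 0; d < count; d++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("device %d: %s, %d multiprocessors, %.1f GB\n",
               d, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }

    /* Bind this process to one device before any other CUDA work. */
    cudaSetDevice(0);
    return 0;
}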
VI. CONCLUSIONS
In conclusion, we propose a parallel programming approach using hybrid CUDA and MPI programming, which partitions loop iterations according to the number of C1060 GPU nodes in a GPU cluster consisting of one C1060 and one S1070. During the experiments, loop iterations assigned to one MPI process were processed in parallel by CUDA, run by the processor cores in the same computational node. The experiments reveal that hybrid parallel programming of multi-core GPU clusters with CUDA, OpenMP, and MPI is a powerful approach to composing high-performance clusters.

REFERENCES
[1] Download CUDA, http:/
[2] D. Göddeke, R. Strzodka, J. Mohd-Yusof, P. McCormick, S. Buijssen, M. Grajewski, and S. Turek, "Exploring weak scalability for FEM calculations on a GPU-enhanced cluster," Parallel Computing, vol. 33, pp. 685-699, Nov. 2007.
[3] P. Alonso, R. Cortina, F. J. Martínez-Zaldívar, and J. Ranilla, "Neville elimination on multi- and many-core systems: OpenMP, MPI and CUDA," Journal of Supercomputing, http:/
[4] Francois Bodin and Stephane Bihan, "Heterogeneous multicore parallel programming for graphics processing units," Scientific Programming, vol. 17, no. 4, pp. 325-336, Nov. 2009.
[5] Specification, Tesla S1070 GPU Computing System.
[6] OpenMP Specification, http://openmp.org/wp/about-openmp/
[7] Message Passing Interface (MPI).
[8] MPICH, A Portable Implementation of MPI.
[9] D. Shreiner, M. Woo, J. Neider, and T. Davis, OpenGL(R) Programming Guide: The Official Guide to Learning OpenGL(R), Addison-Wesley, Reading, MA, Aug. 2005.
[10] (2008) Intel 64 Tesla Linux Cluster Lincoln webpage. [Online]. Available: http://www.ncsa.illinois.edu/UserInfo/Resources/Hardware/Intel64TeslaCluster/
[11] Romain Dolbeau, Stéphane Bihan, and François Bodin, "HMPP: A Hybrid Multi-core Parallel Programming Environment."
[12] The NVIDIA Tesla S1070 1U Computing System - Scalable Many-Core Supercomputing for Data Centers.
[13] Top 500 Supercomputer Sites, "What is Gflop/s."
[14] http:/
[15] http:/
[16] CUDA, http://en.wikipedia.org/wiki/CUDA
[17] CUFFT, CUDA Fast Fourier Transform (FFT) library, http:/
[18] CUBLAS, BLAS (Basic Linear Algebra Subprograms) on CUDA.