Course Lab Report

Course: Parallel Programming
Major/class:            Student ID:            Name:
Advisor:                Report date:
School of Computer Science and Technology

Contents

Experiment 1
    1. Purpose and requirements
    2. Content
    3. Results
Experiment 2
    1. Purpose and requirements
    2. Algorithm description
    3. Experimental plan
    4. Results and analysis
Experiment 3
    1. Purpose and requirements
    2. Algorithm description
    3. Experimental plan
    4. Results and analysis
Experiment 4
    1. Purpose and requirements
    2. Algorithm description
    3. Experimental plan
    4. Results and analysis
Experiment 5
    1. Purpose and requirements
    2. Algorithm description
    3. Experimental plan
    4. Results and analysis
PROJECT 2
    AIM
    HYPOTHESIS
    METHODS
    RESULT
    DISCUSSION & CONCLUSION

Experiment 2 (Pthreads, Monte Carlo): implementation

    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>

    /* The #define lines did not survive extraction; these values are assumed.
       kSpace presumably pads each thread's counter out to its own cache line. */
    #define MaxThreadNum 512
    #define kSamplePoints 10000000
    #define kSpace 16

    int total_hits, hits[MaxThreadNum][kSpace];
    int sample_points_per_thread, num_threads;

    void *compute_pi(void *s);   /* per-thread sampling routine; its body was lost
                                    in extraction -- see the sketch below */

    int main(void)
    {
        int i;
        pthread_t p_threads[MaxThreadNum];
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        printf("Enter num_threads\n");
        scanf("%d", &num_threads);
        total_hits = 0;
        sample_points_per_thread = kSamplePoints / num_threads;
        /* everything from the loop header onward was cut off; reconstructed */
        for (i = 0; i < num_threads; i++)
            pthread_create(&p_threads[i], &attr, compute_pi, (void *)&hits[i]);
        for (i = 0; i < num_threads; i++) {
            pthread_join(p_threads[i], NULL);
            total_hits += hits[i][0];
        }
        printf("pi = %f\n", 4.0 * total_hits / kSamplePoints);
        return 0;
    }
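The per-thread worker that does the actual sampling did not survive extraction. Below is a minimal sketch of what compute_pi presumably looks like, consistent with the globals and the hits[i] argument used in main above; the internals, including the use of rand_r, are reconstructed, not original:

    void *compute_pi(void *s)
    {
        int i, local_hits = 0;
        int *hit_pointer = (int *)s;                 /* points at hits[i][0] */
        unsigned int seed = (unsigned int)(size_t)s; /* any per-thread value works */
        double x, y;

        for (i = 0; i < sample_points_per_thread; i++) {
            x = (double)rand_r(&seed) / RAND_MAX;    /* rand_r: private generator state */
            y = (double)rand_r(&seed) / RAND_MAX;
            if (x * x + y * y <= 1.0)                /* inside the unit quarter circle? */
                local_hits++;
        }
        *hit_pointer = local_hits;   /* one write at the end limits false sharing */
        pthread_exit(0);
    }

rand_r rather than rand is deliberate here: each thread keeps its own generator state, so the threads neither serialize on a shared lock nor corrupt one another's random stream.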
Experiment 3 (OpenMP, Monte Carlo): implementation

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>
    #define SEED 35791246

    int main(int argc, char *argv[])
    {
        int numiter = 0;     /* number of loop iterations */
        double x, y, z, pi;
        int i, count;        /* # of points in the 1st quadrant of unit circle */

        printf("Enter the number of iterations used to estimate pi: ");
        scanf("%d", &numiter);
        srand(SEED);         /* initialize random numbers */
        count = 0;
        int chunk = 1;       /* chunk size for the dynamic schedule */
        #pragma omp parallel shared(chunk) private(i,x,y,z) reduction(+:count)
        #pragma omp for schedule(dynamic,chunk)
        for (i = 0; i < numiter; i++) {
            /* the loop body was cut off; reconstructed as the standard unit-square test */
            x = (double)rand() / RAND_MAX;
            y = (double)rand() / RAND_MAX;
            z = x * x + y * y;
            if (z <= 1.0)
                count++;
        }
        pi = 4.0 * count / numiter;
        printf("# of trials = %d, estimate of pi = %f\n", numiter, pi);
        return 0;
    }
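One caveat the original listing does not address: rand() keeps hidden global state, so calling it from several OpenMP threads at once is not thread-safe, and on many C libraries the calls serialize behind a lock. A thread-safe variant of the same loop (a sketch; only the seeding scheme is new) replaces rand() with rand_r() and a private seed:

    unsigned int seed;
    #pragma omp parallel private(i, x, y, seed) reduction(+:count)
    {
        seed = SEED ^ omp_get_thread_num();   /* a distinct seed per thread */
        #pragma omp for schedule(dynamic, chunk)
        for (i = 0; i < numiter; i++) {
            x = (double)rand_r(&seed) / RAND_MAX;
            y = (double)rand_r(&seed) / RAND_MAX;
            if (x * x + y * y <= 1.0)
                count++;
        }
    }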
Experiment 4 (MPI, Monte Carlo): implementation

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <mpi.h>

    void read_num(long long int *num_point, int my_rank, MPI_Comm comm);
    void compute_pi(long long int num_point, long long int *num_in_cycle,
                    long long int *local_num_point, int comm_sz,
                    long long int *total_num_in_cycle, MPI_Comm comm, int my_rank);

    int main(int argc, char *argv[])
    {
        long long int num_in_cycle, num_point, total_num_in_cycle, local_num_point;
        int my_rank, comm_sz;
        MPI_Comm comm;
        MPI_Init(NULL, NULL);                 /* initialize MPI */
        comm = MPI_COMM_WORLD;
        MPI_Comm_size(comm, &comm_sz);        /* total number of processes */
        MPI_Comm_rank(comm, &my_rank);        /* rank of this process */
        read_num(&num_point, my_rank, comm);  /* read the input */
        compute_pi(num_point, &num_in_cycle, &local_num_point, comm_sz,
                   &total_num_in_cycle, comm, my_rank);
        MPI_Finalize();
        return 0;
    }

    void read_num(long long int *num_point, int my_rank, MPI_Comm comm)
    {
        if (my_rank == 0) {
            printf("please input num in square\n");
            scanf("%lld", num_point);
        }
        /* broadcast:
           int MPI_Bcast(void *data_p,           in/out
                         int count,               in
                         MPI_Datatype datatype,   in
                         int source_proc,         in
                         MPI_Comm comm)           in  */
        MPI_Bcast(num_point, 1, MPI_LONG_LONG, 0, comm);
    }

    void compute_pi(long long int num_point, long long int *num_in_cycle,
                    long long int *local_num_point, int comm_sz,
                    long long int *total_num_in_cycle, MPI_Comm comm, int my_rank)
    {
        *num_in_cycle = 0;
        *local_num_point = num_point / comm_sz;
        double x, y, distance_squared;
        srand(time(NULL));
        for (long long int i = 0; i < *local_num_point; i++) {
            /* the loop body was cut off; reconstructed as the standard test on [-1,1]^2 */
            x = (double)rand() / RAND_MAX * 2 - 1;
            y = (double)rand() / RAND_MAX * 2 - 1;
            distance_squared = x * x + y * y;
            if (distance_squared <= 1)
                (*num_in_cycle)++;
        }
        /* the reduction and output were also lost; a sum to rank 0 is the natural ending */
        MPI_Reduce(num_in_cycle, total_num_in_cycle, 1, MPI_LONG_LONG, MPI_SUM, 0, comm);
        if (my_rank == 0)
            printf("pi = %f\n", 4.0 * (double)*total_num_in_cycle / (double)num_point);
    }
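A small observation on the listing above (not raised in the original report): srand(time(NULL)) gives every rank the same seed whenever the processes start within the same second, so all ranks draw the same points and the extra processes add no statistical value. The usual one-line fix is to offset the seed by the rank:

    srand(time(NULL) + my_rank);   /* decorrelate the per-rank random streams */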
Experiment 5 (CUDA, numerical integration): implementation

    #include <stdio.h>
    #include <stdlib.h>

    #define kIter 1000000   /* total number of integration steps (assumed value) */
    #define kThreads 1000   /* n_blocks * block_size: one output slot per thread */

    /* Only fragments of this kernel survived extraction; the body below is a
       reconstruction in which each thread accumulates a strided share of the
       midpoint sum for pi = integral of 4/(1+x*x) over [0,1]. */
    __global__ void kernel(double *gpu_p)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        double gpu_count = 0.0;
        for (int gpu_i = tid; gpu_i < kIter; gpu_i += kThreads) {
            double x = (gpu_i + 0.5) / (double)kIter;
            gpu_count += 4.0 / (1.0 + x * x);
        }
        gpu_p[tid] = gpu_count / kIter;   /* partial sum times the step width 1/kIter */
    }

    int main(void)
    {
        double p[1000], pi = 0.0, *g_p;
        int i, n_blocks = 10, block_size = 100;   /* example values; the originals were lost */
        cudaMalloc((void **)&g_p, 1000 * sizeof(double));
        kernel<<<n_blocks, block_size>>>(g_p);
        cudaMemcpy(p, g_p, 1000 * sizeof(double), cudaMemcpyDeviceToHost);
        cudaFree(g_p);
        for (i = 0; i < 1000; i++)   /* add up the per-thread partial sums */
            pi += p[i];
        printf("%2.4f\t", pi);
        printf("blocks = %d block size = %d\t", n_blocks, block_size);
        system("pause");   /* Windows-only pause */
        return 0;
    }

4. Results and analysis

The results are as follows:

    [program output screenshot]

Compared with the Monte Carlo method used in the previous three experiments, the integration method is more precise, and computing pi with CUDA is faster.
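For reference (standard material, not spelled out in the surviving text), the integral behind the integration method, together with the midpoint-rule approximation that the reconstructed kernel above implements, is

    \pi = \int_0^1 \frac{4}{1+x^2}\,dx
        \approx \frac{1}{n} \sum_{i=0}^{n-1} \frac{4}{1 + \left(\frac{i+0.5}{n}\right)^2}

The midpoint rule's error shrinks like O(1/n^2) in the number of steps, whereas the Monte Carlo error shrinks like O(1/sqrt(N)) in the number of samples, which is consistent with the precision observation above.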
PROJECT 2

AIM:
- master the two typical parallel program development tools (OpenMP and MPI);
- understand the similarities and differences between the two tools in the process of parallel program design and optimization;
- adjust and analyze the parallel granularity of the resulting parallel algorithms in the course of optimization;
- further understand the principles of parallel programming and what to pay attention to.

HYPOTHESIS:
Experiments 3 and 4 both estimate pi with the Monte Carlo method, implemented with OpenMP and MPI respectively. The results above show that, for the same number of iterations, the MPI version takes less time; we therefore hypothesize that MPI performs better on large workloads.

METHODS:
Using the programs from Experiments 3 and 4, with the computation of pi as the example, analyze what the two implementations of the Monte Carlo algorithm share and where they differ.

RESULT:
10000 iterations:
    with OpenMP: [output screenshot]
    with MPI: [output screenshot]
1000000 iterations:
    with OpenMP: [output screenshot]
    with MPI: [output screenshot]

The results show that with few threads OpenMP is faster than MPI, while past a certain number of threads OpenMP is no longer faster than MPI; OpenMP does, however, give higher precision.
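The timings behind this comparison come from the program runs (the screenshots above did not survive extraction). A minimal way to instrument both versions, using the wall-clock timer each runtime provides (a sketch; the surrounding variables are those of the listings above):

    /* OpenMP version: wrap the parallel loop */
    double t0 = omp_get_wtime();
    /* ... parallel loop from Experiment 3 ... */
    printf("elapsed: %f s\n", omp_get_wtime() - t0);

    /* MPI version: wrap compute_pi, reported by rank 0 */
    MPI_Barrier(comm);
    double t1 = MPI_Wtime();
    compute_pi(num_point, &num_in_cycle, &local_num_point, comm_sz,
               &total_num_in_cycle, comm, my_rank);
    if (my_rank == 0)
        printf("elapsed: %f s\n", MPI_Wtime() - t1);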
DISCUSSION & CONCLUSION:
OpenMP is a parallel processing model based on shared memory, while MPI is one based on message passing. OpenMP code is comparatively simple to write: specific compiler directives are enough for the work to be decomposed across threads automatically. MPI is more involved, because communication between processes has to be handled explicitly. In short, OpenMP is a thread-level parallel programming technology and MPI a process-level one; each has its own strengths and weaknesses, and the characteristics of the machine should be taken into account when tuning a program for best performance. Because OpenMP depends on shared memory it scales poorly and is unsuited to clusters; MPI scales well and runs on all kinds of machines, but it is more complex and harder to debug. Combining the two judiciously can optimize a program's execution to the greatest extent.
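To make that last point concrete, the following is a minimal sketch of the hybrid style (an illustration, not code from the report): MPI splits the samples across processes, and each process uses an OpenMP reduction over its local share.

    /* hybrid MPI + OpenMP Monte Carlo estimate of pi (sketch) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        long long num_point = 10000000, local_hits = 0, total_hits = 0;
        int my_rank, comm_sz;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
        long long local_n = num_point / comm_sz;      /* MPI level: split across ranks */
        #pragma omp parallel reduction(+:local_hits)  /* OpenMP level: threads share a rank's work */
        {
            unsigned int seed = 1 + 131 * my_rank + omp_get_thread_num();
            #pragma omp for
            for (long long i = 0; i < local_n; i++) {
                double x = (double)rand_r(&seed) / RAND_MAX;
                double y = (double)rand_r(&seed) / RAND_MAX;
                if (x * x + y * y <= 1.0)
                    local_hits++;
            }
        }
        MPI_Reduce(&local_hits, &total_hits, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (my_rank == 0)
            printf("pi = %f\n", 4.0 * total_hits / num_point);
        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper plus OpenMP (for example mpicc -fopenmp), this runs one process per node and one thread per core, which matches the scalability trade-off described above.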