This is the Spring 2007 project of Aydin Buluc and Fenglin Liao.

----

== Project Description ==

The project is to write a 2D parallel implementation of the sparse matrix times sparse matrix (SpGEMM) routine. It will be implemented in C++ using MPI. The project will proceed in multiple iterations:

1) The simplest decomposition is the regular block checkerboard partitioning [http://www.parallab.uib.no/projects/molecul/matrix/main/node12.html], with a SUMMA-like algorithm on top of it. Here we assume the existence of a fast sequential algorithm for sparse matrix multiplication that is used as a black box. (A rough sketch of this approach appears after this list.)

2) The second way to decompose the data is to assign an equal number of edges (non-zeros) to each processor. The initial idea is to map each non-zero of the matrix to a point in the rectangular (m x n) region and to use a quadtree [http://en.wikipedia.org/wiki/Quadtree] data structure. Here one may or may not use a sequential matrix multiplication as a black box [yet to be explored].

3) The third and hardest way is to assign edges to processors such that the following metric is minimized:

<math>\min\big(\max_i(\mathrm{communication}(i) + \mathrm{computation}(i))\big)</math>

In other words, we try to load balance across the processors. Load balancing is generally done using graph partitioning [http://www.math.ruu.nl/publications/preprints/1238.ps] [http://medicine.osu.edu/informatics/DIGC/pubs/umit_irr01.pdf], but it is unclear how to do this for parallel SpGEMM. Whether the load balancing will be done statically or dynamically is yet to be determined.
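
To make the first approach concrete, below is a minimal sketch of the SUMMA-like loop on a sqrt(p)-by-sqrt(p) processor grid, written in C++ with MPI. The SpMat type and the helpers (broadcast_block, local_multiply_add, summa_spgemm) are hypothetical names invented for this illustration, not part of any library; in particular, local_multiply_add is only a naive stand-in for the fast sequential kernel that the algorithm treats as a black box.

<pre>
// Sketch of a SUMMA-like SpGEMM; all names here are placeholders.
#include <mpi.h>
#include <vector>
#include <cstddef>

// Hypothetical local sparse block, stored as coordinate triples.
struct SpMat {
    std::vector<int> row, col;
    std::vector<double> val;
};

// Broadcast a sparse block from `root` to all ranks in `comm`.
void broadcast_block(SpMat& b, int root, MPI_Comm comm)
{
    int nnz = static_cast<int>(b.val.size());
    MPI_Bcast(&nnz, 1, MPI_INT, root, comm);
    b.row.resize(nnz);
    b.col.resize(nnz);
    b.val.resize(nnz);
    if (nnz == 0) return;
    MPI_Bcast(&b.row[0], nnz, MPI_INT, root, comm);
    MPI_Bcast(&b.col[0], nnz, MPI_INT, root, comm);
    MPI_Bcast(&b.val[0], nnz, MPI_DOUBLE, root, comm);
}

// Naive stand-in for the black-box sequential kernel: for every pair
// of nonzeros A(i,k), B(k,j), append a contribution to C(i,j).
// A real kernel would also merge duplicate (i,j) entries.
void local_multiply_add(SpMat& C, const SpMat& A, const SpMat& B)
{
    for (std::size_t p = 0; p < A.val.size(); ++p)
        for (std::size_t q = 0; q < B.val.size(); ++q)
            if (A.col[p] == B.row[q]) {
                C.row.push_back(A.row[p]);
                C.col.push_back(B.col[q]);
                C.val.push_back(A.val[p] * B.val[q]);
            }
}

// SUMMA-like SpGEMM: each processor owns one block of A, B, and C.
// row_comm spans this processor's grid row, col_comm its grid column,
// and grid_dim = sqrt(p). Assumes ranks within row_comm (col_comm)
// are numbered by grid column (row), so rank k owns the k-th block.
void summa_spgemm(SpMat& C_local, const SpMat& A_local, const SpMat& B_local,
                  MPI_Comm row_comm, MPI_Comm col_comm, int grid_dim)
{
    for (int k = 0; k < grid_dim; ++k) {
        // Stage k: the owner of A(i,k) broadcasts it along grid row i,
        // and the owner of B(k,j) broadcasts it down grid column j.
        SpMat A_recv = A_local;   // overwritten on non-root ranks
        SpMat B_recv = B_local;
        broadcast_block(A_recv, k, row_comm);
        broadcast_block(B_recv, k, col_comm);

        // C(i,j) += A(i,k) * B(k,j) via the sequential kernel.
        local_multiply_add(C_local, A_recv, B_recv);
    }
}
</pre>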

----

== Resources ==

Boost.MPI [http://www.generic-programming.org/~dgregor/boost.mpi/doc/mpi/getting_started.html#mpi.mpi_impl]

SDSC Resources [http://www.sdsc.edu/us/resources/datastar/references.html]

== Installing Boost.MPI on DataStar ==

1) Download Boost into the project directory.

2) Make sure mpCC works.

3) Go to the "../boost/tools/jam/src" folder.

4) Type "./build.sh vacpp".

5) Go to the "../boost/tools/jam/src/bin.aixppc" folder and copy the "bjam" executable to the "../boost" directory (i.e. the top-level Boost directory).

6) Copy "../boost/tools/build/v2/user-config.jam" to $HOME and add the line "using mpi ;" (Boost.Build requires the space before the semicolon).
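
Once the build finishes, a quick way to check that Boost.MPI actually works is to compile and run the hello-world program from the Boost.MPI getting-started guide linked above (the exact compile and launch commands depend on the DataStar environment, e.g. mpCC and the site's MPI job launcher):

<pre>
#include <boost/mpi/environment.hpp>
#include <boost/mpi/communicator.hpp>
#include <iostream>

int main(int argc, char* argv[])
{
    boost::mpi::environment env(argc, argv);  // initializes and finalizes MPI
    boost::mpi::communicator world;           // wraps MPI_COMM_WORLD

    std::cout << "I am process " << world.rank()
              << " of " << world.size() << "." << std::endl;
    return 0;
}
</pre>

When run on a few processors, each rank should print its own rank and the total number of processes.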

== Accessing source ==

svn checkout svn+ssh://gauss.cs.ucsb.edu/home/aydin/CS240/SVN