Step 3: MPI (Message Passing Interface) and the Cell SDK

MPI is one of the most popular open standards for clustering/supercomputing on commodity hardware. While MPI runs on many platforms, we assume from this point on that you have Linux running on your system. To enable the system as a cluster 'node' you need to add the necessary communications layers, and then MPI itself. We illustrate each install with the 'yum' package manager, but you can install these packages by whatever method you are most comfortable with. Finally, we include a link to the IBM Cell SDK (currently version 3.0); compiling your programs with its compiler helps you get the fastest performance from your MPI grid.

Step 3a - Installing SSH: SSH provides the secure network communications that MPI relies on; in particular, the MPI launcher logs in to each node over SSH to start your program. We use the 'yum' installer to install the server component first.
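On a Fedora-style system the commands look roughly like the following (a minimal sketch; the package names, service commands, and the node name ps3node2 are assumptions that may differ on your distribution):

    # Install the OpenSSH server and client packages
    yum install openssh-server openssh-clients

    # Start the SSH daemon now and at every boot
    /sbin/service sshd start
    /sbin/chkconfig sshd on

    # Create a key pair and copy it to each node so that MPI can
    # log in without prompting for a password
    ssh-keygen -t rsa
    ssh-copy-id user@ps3node2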

Step 3b - NFS (Network File System) is the Unix/Linux standard for network file sharing. MPI needs to find the program you are running at the same path on every node of the cluster; an NFS share gives each system that common, published location.
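A minimal sketch of the setup, assuming the share lives at /mpishare, the head node is named headnode, and the cluster sits on the 192.168.1.0 subnet (all of these are example values):

    # On the head node: export a shared directory
    mkdir /mpishare
    echo '/mpishare 192.168.1.0/255.255.255.0(rw,sync)' >> /etc/exports
    exportfs -a
    /sbin/service nfs start
    /sbin/chkconfig nfs on

    # On each compute node: mount the share at the same path
    mkdir /mpishare
    mount headnode:/mpishare /mpishare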

Step 3c - MPI Install/Config: With SSH and NFS in place you are ready to install and configure MPI itself. Specifically, we use the OpenMPI distribution (though there are other implementations).
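Again with yum, the install plus a basic hostfile might look like this (the package names, node names, and slot counts are assumptions to adapt to your cluster):

    # Install the OpenMPI runtime and development packages
    yum install openmpi openmpi-devel

    # List every node in a hostfile kept on the NFS share
    cat > /mpishare/hosts <<EOF
    headnode slots=1
    ps3node2 slots=1
    EOF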

Step 3d - An MPI program: Assuming your MPI setup is in order, you can test the cluster by creating and running a test program. We will use the program "Pi.c" for this demonstration: a short C program in which each node of the cluster computes part of an approximation of Pi and reports its result using MPI services.
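The listing below is a minimal sketch of such a program, not necessarily the exact Pi.c used here: each rank integrates its share of 4/(1+x^2) over [0,1] by the midpoint rule, prints its partial result, and rank 0 sums the pieces with MPI_Reduce.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        const long n = 1000000;   /* number of integration intervals */
        long i;
        int rank, size;
        double h, x, sum = 0.0, local_pi, pi;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Integrating 4/(1+x^2) from 0 to 1 yields Pi; each rank
           handles every size-th interval. */
        h = 1.0 / (double)n;
        for (i = rank; i < n; i += size) {
            x = h * ((double)i + 0.5);
            sum += 4.0 / (1.0 + x * x);
        }
        local_pi = h * sum;

        /* Every node reports its share, then rank 0 prints the total. */
        printf("Node %d of %d: partial result %.10f\n", rank, size, local_pi);
        MPI_Reduce(&local_pi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("Pi is approximately %.10f\n", pi);

        MPI_Finalize();
        return 0;
    }

Compile it into the NFS share with mpicc Pi.c -o /mpishare/Pi, then launch it across the cluster with something like mpirun -np 2 --hostfile /mpishare/hosts /mpishare/Pi (typical OpenMPI usage; adjust the process count to match your node list).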

It may not look like much, but hopefully you now have a functioning MPI cluster.
