The first step, described in section 2, had already been completed: the computers were present and accounted for. At this point the machines needed to be configured, which included installing the operating system and setting up the file systems. The operating system selected for these machines was Linux4, chosen for its compatibility with the Unix platform used by the rest of TJNAF and for its versatility; Linux and other open-source Unix derivatives are common choices for Beowulf clusters, with Linux being the most popular [4, p. 19]. Since the internal nodes were not equipped with an internal CD-ROM drive, the operating system was installed from a MicroSolutions backpack external CD-ROM drive. One notable dilemma was how to boot a computer from an external CD-ROM drive when no drivers or operating system are yet in place; an installation kernel containing the driver for this drive had to be used5.
For the nodes to cooperate, they must be able to recognize one another, which requires assigning IP addresses and establishing a network. For this project a class C reserved (private) network sufficed, so the network address 192.168.68.0 was selected. The front-end computer, named hydra after the many-headed mythological monster, was given the address 192.168.68.1. The remaining nodes were given generic names formed by prepending `b0' to the last digit of the IP address: node 2 had the address 192.168.68.2 and was named b02, while node 6 had the IP address 192.168.68.6 and was named b06. Each internal node had all of the development libraries and a full complement of available language support installed, along with compilers and any other software that seemed useful.
A key step in setting up the nodes was to make each node its own name server (i.e., adding the line nameserver 127.0.0.1 to the file /etc/resolv.conf) and to add each of the nodes to the /etc/hosts file:
127.0.0.1    localhost localhost.localdomain loopback
192.168.68.1 hydra.beowulf.trial hydra
192.168.68.2 b02.beowulf.trial b02
192.168.68.3 b03.beowulf.trial b03
192.168.68.4 b04.beowulf.trial b04
192.168.68.5 b05.beowulf.trial b05
192.168.68.6 b06.beowulf.trial b06
192.168.68.7 b07.beowulf.trial b07
192.168.68.8 b08.beowulf.trial b08
192.168.68.9 b09.beowulf.trial b09
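Because the node names and addresses follow a fixed pattern, the host table above can be generated rather than typed by hand. The following is an illustrative sketch (the loop and variable name are not part of the original setup; in practice the output would be appended to /etc/hosts on each node):

```shell
# Build the cluster's /etc/hosts entries; names follow the b0<N> pattern.
hosts_table="127.0.0.1 localhost localhost.localdomain loopback
192.168.68.1 hydra.beowulf.trial hydra"
for i in 2 3 4 5 6 7 8 9; do
    hosts_table="$hosts_table
192.168.68.$i b0$i.beowulf.trial b0$i"
done
# Print the generated table (instead of writing to /etc/hosts directly).
printf '%s\n' "$hosts_table"
```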
The .rhosts file for the cluster listed each machine, one per line:

hydra
b02
b03
b04
b05
b06
b07
b08
b09

The critical point is that the .rhosts file must be in the user's home directory on the remote machine.
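Producing and distributing the .rhosts file can likewise be scripted. This sketch builds the host list locally; the distribution step is shown only as a comment because it presumes working remote access to the nodes, and the file path is illustrative:

```shell
# Build the .rhosts contents: one trusted host name per line.
rhosts="hydra"
for i in 2 3 4 5 6 7 8 9; do
    rhosts="$rhosts
b0$i"
done
printf '%s\n' "$rhosts" > /tmp/rhosts.example
# On the real cluster this file would then be copied into each user's
# home directory on every node, e.g.:
#   rcp /tmp/rhosts.example b02:.rhosts
```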
> export PRSH_HOSTS='b02 b03 b04 b05 b06 b07 b08 b09'

A command like
> prsh -- mkdir -p ~/tmp/data_store/

would create the directory tmp/data_store/ in the user's home directory on each node. With prsh installed, the MPI libraries were installed from an RPM package9 and the cluster was ready for testing.
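Conceptually, prsh simply runs the given command over rsh on each host named in PRSH_HOSTS. A rough shell equivalent is sketched below; the rsh invocations are echoed rather than executed, since running them assumes live rsh access to the nodes:

```shell
# A minimal stand-in for prsh: iterate over the host list and run the
# command on each node via rsh. Here the rsh call is echoed, not run.
PRSH_HOSTS='b02 b03 b04 b05 b06 b07 b08 b09'
for host in $PRSH_HOSTS; do
    echo rsh "$host" mkdir -p 'tmp/data_store/'
done
```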