How to NFS-export a directory from head node & mount on a set of nodes

This article describes how to NFS-export a directory from the head node and mount it on a set of nodes using cmsh, the Bright Cluster Manager shell. The whole task, including verification, takes only a few steps.

We'll use /tmp as an example. Here's what we need to do:

1. NFS export /tmp on the head node
2. Mount master:/tmp on the nodes as /mnt/tmp

Let's get started.

First, let's verify that the mount point /mnt/tmp does not exist on the nodes in the default node category.

[atom-head1->device]% pexec -c default -j "ls -l /mnt"
Nodes down:         atomgpu01
[atomgpu01]
Node down

[atom01..atom03]
total 0

Next, export the /tmp directory from the head node. Note that the export name /tmp@internalnet follows the convention <path>@<network>.

[atom-head1]% device use master
[atom-head1->device[atom-head1]]% fsexports
[atom-head1->device[atom-head1]->fsexports]% add /tmp@internalnet
[atom-head1->device*[atom-head1*]->fsexports*[/tmp@internalnet*]]% set path /tmp
[atom-head1->device*[atom-head1*]->fsexports*[/tmp@internalnet*]]% set write true
[atom-head1->device*[atom-head1*]->fsexports*[/tmp@internalnet*]]% set async true
[atom-head1->device*[atom-head1*]->fsexports*[/tmp@internalnet*]]% set fsid 5
[atom-head1->device*[atom-head1*]->fsexports*[/tmp@internalnet*]]% set hosts internalnet
[atom-head1->device*[atom-head1*]->fsexports*[/tmp@internalnet*]]% commit
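
For reference, the export configured above should correspond to an /etc/exports entry on the head node roughly like the one below. The file is generated and managed by CMDaemon, so treat this as an illustration only, not something to edit by hand:

/tmp 10.141.0.0/16(rw,async,fsid=5,no_root_squash)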

Verify the directory has been exported:

[atom-head1->device[atom-head1]->fsexports[/tmp@internalnet]]% show
Parameter                      Value
------------------------------ ------------------------------------------------
All squash                     no
Anonymous GID                  65534
Anonymous UID                  65534
Async                          yes
Extra options
Hosts                          internalnet (10.141.0.0/16)
Name                           /tmp@internalnet
Path                           /tmp
RDMA                           no
Revision
Root squash                    no
Write                          yes
fsid                           5

Also verify using the standard 'exportfs' command:

[root@atom-head1 default-image]# exportfs
/cm/shared                          10.141.0.0/16
/home                               10.141.0.0/16
/var/spool/burn                     10.141.0.0/16
/cm/node-installer/certificates     10.141.0.0/16
/cm/node-installer                  10.141.0.0/16
/tmp                                10.141.0.0/16
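
As an additional check, showmount can query the server's export list directly (the output shown is illustrative):

[root@atom-head1 ~]# showmount -e localhost | grep /tmp
/tmp                                10.141.0.0/16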

Now let's add the mount to the default node category. Note that the device name $localnfsserver resolves to the local NFS server, which could be the head node (in the case of local nodes) or the cloud director (in the case of cloud nodes).

[atom-head1->category[default]->fsmounts]% add /mnt/tmp
[atom-head1->category*[default*]->fsmounts*[/mnt/tmp*]]% set device $localnfsserver:/tmp
[atom-head1->category*[default*]->fsmounts*[/mnt/tmp*]]% set mountpoint /mnt/tmp
[atom-head1->category*[default*]->fsmounts*[/mnt/tmp*]]% set filesystem nfs
[atom-head1->category*[default*]->fsmounts*[/mnt/tmp*]]% commit

[atom-head1->category[default]->fsmounts[/mnt/tmp]]% list
Device                      Mountpoint (key)                 Filesystem
--------------------------- -------------------------------- ----------------
none                        /dev/pts                         devpts
none                        /proc                            proc
none                        /sys                             sysfs
none                        /dev/shm                         tmpfs
$localnfsserver:/cm/shared  /cm/shared                       nfs
$localnfsserver:/home       /home                            nfs
$localnfsserver:/tmp        /mnt/tmp                         nfs
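
On the nodes, CMDaemon translates this entry into the node's /etc/fstab, substituting $localnfsserver with the actual NFS server (here, master). The resulting line should look roughly like this; the exact mount options may differ:

master:/tmp  /mnt/tmp  nfs  defaults  0 0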

Let's verify that the NFS filesystem is now mounted on the nodes:

[atom-head1->device]% pexec -c default -j mount
Nodes down:         atomgpu01
[atomgpu01]
Node down

[atom01..atom03]
rootfs on / type rootfs (rw)
proc on /proc type proc (rw,relatime)
sys on /sys type sysfs (rw,relatime)
tmpfs on / type tmpfs (rw,relatime,size=0k,mpol=interleave:0)
/sysroot/proc on /proc type proc (rw,relatime)
/sysroot/sys on /sys type sysfs (rw,relatime)
/proc/bus/usb on /proc/bus/usb type usbfs (rw,relatime)
none on /dev type devtmpfs (rw,relatime,size=2017428k,nr_inodes=504357,mode=755)
none on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
none on /dev/shm type tmpfs (rw,relatime)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
master:/cm/shared on /cm/shared type nfs (rw,relatime)
master:/home on /home type nfs (rw,relatime)
master:/tmp on /mnt/tmp type nfs (rw,relatime)

Now let's list the /mnt directory on the cluster nodes to confirm that the tmp mount point is present:

[atom-head1->device]% pexec -c default -j "ls -l /mnt"
Nodes down:         atomgpu01
[atomgpu01]
Node down

[atom01..atom03]
total 4
drwxr-xr-x 10 2402 gopher 4096 Aug  8 12:15 tmp

And now the contents of /mnt/tmp itself, which should match /tmp on the head node:

[atom-head1->device]% pexec -c default -j "ls -l /mnt/tmp"
Nodes down:         atomgpu01
[atomgpu01]
Node down

[atom01..atom03]
total 15512
drwxr-xr-x 2 root root     4096 Jul 31 22:57 arcetri
-rw-r--r-- 1 root root 13604922 Jul  7  2011 LinuxKit.tgz
drwx------ 2 root root    16384 Jul  7 10:41 lost+found
drwx------ 2   42   42     4096 Jul 27 06:21 orbit-gdm
drwx------ 2   42   42     4096 Jul 27 06:21 pulse-lIAzUuyS51kt
drwx------ 2 root root     4096 Jul 27 06:16 pulse-xfs6ftI4dQkC
-rw-r--r-- 1 root root   629195 Aug  6 14:25 saved.xml
-rw------- 1   48   48   317799 Jul 31 09:19 wsdl-apache-86d9865ed335cf24a654453d08a343e0
-rw------- 1   48   48   308905 Jul 31 09:19 wsdl-apache-a2a536a5a8e2531de0452475ea8eafbd
-rw------- 1   48   48   305583 Jul 31 09:19 wsdl-apache-a3b2f8a80c872439779f10de02a4a56d
-rw------- 1   48   48   311556 Jul 31 09:19 wsdl-apache-d9e05d8b6c93ebbacad5c34a497807c6
-rw------- 1   48   48   314514 Jul 31 09:19 wsdl-apache-f0fe8bcff41e819b66bb9f66358195c0
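
To confirm that the nodes can also write to the share, a quick test can be run with pexec (the file name nfs-write-test is just an example):

[atom-head1->device]% pexec -c default -j "touch /mnt/tmp/nfs-write-test && echo write OK"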

We're done.
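
Finally, if the export is no longer needed at some point, both the mount and the export can be removed from cmsh again. A sketch (adjust the category and host names to your cluster):

[atom-head1->category[default]->fsmounts]% remove /mnt/tmp
[atom-head1->category*[default*]->fsmounts*]% commit
[atom-head1->device[atom-head1]->fsexports]% remove /tmp@internalnet
[atom-head1->device*[atom-head1*]->fsexports*]% commit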
