By Robert Stober | August 20, 2012 | Cluster Management, User authentication, cluster security
Bright Cluster Manager makes many administrative tasks easy to perform. This article shows how to configure user authentication against an external NIS server in Bright Cluster Manager. You can accomplish this in two straightforward steps:
1) Configure the head node
2) Configure the compute nodes
Let's do it.
Configure the head node
Add the NIS server to the hosts file.
[root@bcm6-sl5-head1 ~]# cat /etc/hosts
# This section of this file was automatically generated by cmd. Do not edit manually!
# BEGIN AUTOGENERATED SECTION -- DO NOT REMOVE
127.0.0.1 localhost.localdomain localhost
10.141.255.254 bcm6-sl5-head1.cm.cluster bcm6-sl5-head1 master.cm.cluster master
# END AUTOGENERATED SECTION -- DO NOT REMOVE
192.168.2.6 dns1.zohallt.com dns1
Edit the yp.conf file
[root@bcm6-sl5-head1 ~]# cat /etc/yp.conf
domain zohallt.com server dns1.zohallt.com
Edit the nsswitch.conf file
[root@bcm6-sl5-head1 ~]# cat /etc/nsswitch.conf
passwd: files nis
shadow: files nis
group: files nis
Start the ypbind service
[root@bcm6-sl5-head1 ~]# service ypbind start
Setting NIS domain name zohallt.com: [ OK ]
Binding to the NIS domain: [ OK ]
Listening for an NIS domain server.
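You can also confirm which server ypbind actually bound to. The ypwhich command is part of the standard yp-tools package:

```shell
# Print the NIS server this host is currently bound to.
# If binding succeeded, this should print the server configured
# in /etc/yp.conf, e.g. dns1.zohallt.com.
ypwhich
```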
Test basic NIS functionality
[root@bcm6-sl5-head1 ~]# ypcat passwd
testuser:x:501:501::/home/testuser:/bin/bash
Configure the ypbind service to start at system boot
[root@bcm6-sl5-head1 ~]# chkconfig ypbind on
[root@bcm6-sl5-head1 ~]# chkconfig --list ypbind
ypbind 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Shut down the LDAP service
By default, Bright monitors the LDAP service and restarts it if it dies. Disable LDAP service monitoring and auto-starting so that Bright will not restart LDAP after we stop it.
Stop the LDAP service and verify that it is stopped.
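From the command line, this can be done with the standard service tools. This is a sketch; the init script name ("ldap") is an assumption for this SL5-era head node and may differ on your system:

```shell
# Stop the LDAP daemon and keep it from starting at boot.
# "ldap" as the init script name is an assumption -- check /etc/init.d.
service ldap stop
chkconfig ldap off
chkconfig --list ldap   # all runlevels should now show "off"
```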
By default, Bright periodically runs an LDAP health check, which will fail since we just stopped the LDAP service. Disable the LDAP health check as follows:
Select "Monitoring Configuration" in the resource tree, click on the "Health Check Configuration" tab, and select "All Head Nodes" from the select list. Select the "ldap" health check then press the "Edit" button.
Select the "Disabled" checkbox then press "Ok".
Press "Save".
Remove /home from exports
Since the home directories will now come from the external server, the head node no longer needs to export /home.
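For example, comment out (or delete) the /home line in /etc/exports on the head node and re-export. This is a sketch; the example export entry shown is an assumption, so match it against what is actually in your exports file:

```shell
# /etc/exports on the head node -- remove or comment out the /home export, e.g.:
#   /home    10.141.0.0/16(rw,no_root_squash)   # <- example entry to remove
# Then apply the change to the running NFS server:
exportfs -ra
```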
Open ports on firewall
Uncomment the following lines in the /etc/shorewall/rules file.
# -- Allow NFS traffic from outside to the master
ACCEPT net fw tcp 111 # portmapper
ACCEPT net fw udp 111
ACCEPT net fw tcp 2049 # nfsd
ACCEPT net fw udp 2049
ACCEPT net fw tcp 4000 # statd
ACCEPT net fw udp 4000
ACCEPT net fw tcp 4001 # lockd
ACCEPT net fw udp 4001
ACCEPT net fw udp 4005
ACCEPT net fw tcp 4002 # mountd
ACCEPT net fw udp 4002
ACCEPT net fw tcp 4003 # rquotad
ACCEPT net fw udp 4003
Restart the shorewall service.
[root@bcm6-sl5-head1 ~]# service shorewall restart
Compiling...
Shorewall configuration compiled to /var/lib/shorewall/.restart
Restarting Shorewall....
done.
Add external mount
Mount the home directories from the external server on the head node so that NIS users' home directories are available after login.
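One way to do this is an /etc/fstab entry that NFS-mounts the home directories from the external server. This is a sketch; the exported path /home on dns1 is an assumption, so substitute the path your server actually exports:

```shell
# /etc/fstab addition on the head node (export path is an assumption):
#   dns1.zohallt.com:/home  /home  nfs  defaults  0 0
# Mount it without rebooting, then verify:
mount -a
df -h /home   # should now show dns1.zohallt.com:/home as the source
```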
Reboot the head node
We've made a lot of changes, so we might as well reboot and test the integration so far.
[root@bcm6-sl5-head1 ~]# reboot
When the head node comes back up, test the NIS integration by logging in as an NIS user. If everything is configured correctly, the user will be authenticated against the NIS server and the home directory will be mounted.
[root@bcm6-sl5-head1 ~]# su - testuser
Creating DSA key for ssh
[testuser@bcm6-sl5-head1 ~]$ pwd
/home/testuser
Configure the compute nodes
Edit the yp.conf file in the image
[root@bcm6-sl5-head1 ~]# cat /cm/images/default-image/etc/yp.conf
domain zohallt.com server dns1.zohallt.com
Add the NIS server to the hosts file in the software image
[root@bcm6-sl5-head1 ~]# cat /cm/images/default-image/etc/hosts
# This section of this file was automatically generated by cmd. Do not edit manually!
# BEGIN AUTOGENERATED SECTION -- DO NOT REMOVE
127.0.0.1 localhost.localdomain localhost
# END AUTOGENERATED SECTION -- DO NOT REMOVE
192.168.2.6 dns1.zohallt.com dns1
Edit the nsswitch.conf file in the image
[root@bcm6-sl5-head1 ~]# cat /cm/images/default-image/etc/nsswitch.conf
passwd: files nis
shadow: files nis
group: files nis
Configure the ypbind service to start at system boot
[root@bcm6-sl5-head1 ~]# chroot /cm/images/default-image
[root@bcm6-sl5-head1 /]# chkconfig ypbind on
[root@bcm6-sl5-head1 /]# chkconfig --list ypbind
ypbind 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Modify the /home mount point
Change the /home mount for the compute nodes so that they mount home directories from the external server rather than from the head node.
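In cmsh this is done in the node category's filesystem mounts submode. The session below is a sketch from memory of Bright's cmsh; the mode and property names are assumptions, so verify them with "help" inside cmsh before committing:

```shell
# cmsh session -- point the default category's /home mount at the external
# server. Mode and property names ("fsmounts", "device", "filesystem") are
# assumptions; check them against your Bright version.
cmsh
category use default
fsmounts
use /home
set device dns1.zohallt.com:/home
set filesystem nfs
commit
```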
Reboot the compute nodes.
Test the integration. You should be able to log in to any node in the default node category as testuser.
[testuser@bcm6-sl5-head1 ~]$ id
uid=501(testuser) gid=501(testuser) groups=501(testuser)
[testuser@bcm6-sl5-head1 ~]$ ssh node001
Last login: Fri Aug 17 10:32:09 2012 from bcm6-sl5-head1.cm.cluster
[testuser@node001 ~]$ pwd
/home/testuser
You're finished!