Getting Started Guide: EASY8


Install your Easy8 cluster by following these simple instructions:


Plan your installation

The Bright head node installer makes it easy to deploy Linux clusters. The installation wizard asks you questions and then uses your answers to configure and deploy the cluster. It installs the supported OS of your choice, sets up necessary services, configures the cluster networks, and creates a default software image that you can use to provision the compute nodes. 

Planning and working through your desired installation beforehand is extremely important. Having a defined plan will help you provide accurate answers to the installer's questions. Bright therefore highly recommends that you read through this Getting Started Guide and prepare your answers to the questions prior to commencing the installation.

Prepare the installation media

If you are installing on a physical head node you will need to “burn” the Bright Cluster Manager ISO to either a DVD or USB memory stick. The result will be a bootable DVD or a bootable USB memory stick. 

Windows or MacOS

Creating a bootable USB memory stick is not straightforward on most systems. Users are recommended to use third party software such as Etcher to make it easier. Etcher can be used to create a bootable USB memory stick on Linux, Windows and MacOS.

Note: Etcher is not supported or endorsed by Bright Computing.

Windows 

  • Browse to etcher.io, download and install Etcher
  • Click on "Select image" and choose the ISO
  • Click on "Select drive" and choose the USB memory stick to use
  • Click on "Flash!"
  • Wait for it to finish

It is good practice to enable "Validate write on success" within Etcher settings and wait until it validates. This greatly reduces the chance of failure during installation.

Linux

Linux users can create bootable USB installation media using the mkusb-bright.sh script, which is included within the ISO. You will need to know the correct USB device name. The output of the ‘lsblk’ command may be helpful in determining the appropriate device name. The examples below use /dev/sdx as the USB device. Please replace 'sdx' with the appropriate block device name in the commands below.
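
For example, listing only whole disks with their size, transport type, and model usually makes the USB memory stick easy to identify:

# lsblk -d -o NAME,SIZE,TRAN,MODEL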

  • Mount the Bright DVD/ISO on a temporary directory to access the mkusb-bright.sh script

# mkdir /mnt/bright-dvd

# mount -o loop bright9.0-centos7u7.iso /mnt/bright-dvd

  • Run the mkusb-bright.sh script

Usage:

/mnt/bright-dvd/boot/mkusb-bright.sh <PATH_TO_BRIGHT_ISO> <USB_DEVICE>

Example:

Write the Bright ISO /tmp/bright9.0-centos7u7.iso to the USB device /dev/sdx

 # /mnt/bright-dvd/boot/mkusb-bright.sh -i /tmp/bright9.0-centos7u7.iso -u /dev/sdx
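
Once the USB memory stick has been written, the ISO can be unmounted from the temporary directory:

# umount /mnt/bright-dvd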

Configure the head node

Configure the BIOS of the head node

Open the BIOS and ensure that the device being used to boot the Bright installer is above the hard drive in the boot order list. For example, if installing from a DVD, booting from the DVD drive should be attempted before booting from the hard drive. In addition, the BIOS should have the local time set.

Install the head node software

Either place the DVD into the DVD drive of the head node or insert the USB memory stick into an available USB port on the head node. Power on the head node. The head node should now boot from the installation media. If it does not, please confirm that the boot order within the BIOS is correct, and that the installation media has been created correctly. Use the up/down keys to select the option “Start the Bright Cluster Manager Graphical Installer”, then press Enter.

Once the head node boots the installer will be running. Click on “Start Installation”.

Accept the EULAs for Bright Cluster Manager and the selected OS on the following screen.

The “Kernel modules” screen displays the kernel modules which will be deployed during head node installation. Typically, no alterations will be required, and pressing the “Next” button will continue. However, if required, additional kernel modules can be loaded by clicking on the “+” button, and unwanted kernel modules can be blacklisted (removed).

Note: Kernel modules can also be configured after the installation has completed.

The “Hardware info” screen displays the hardware that was detected when the installer ran modprobe. If hardware which was expected to be seen is not present, press the “Back” button to load additional modules or re-order the existing modules. Otherwise, press “Next”.

The “DVD/ISO/USB” screen requires the selection of the installation source media. To prevent failure during installation, Bright recommends that you verify the media by pressing the “Run media integrity check” button.

The “General cluster settings” screen collects information which is applied to the entire cluster. Define the cluster name (spaces are allowed), provide the name of your company or organization, provide the cluster administrator’s email address, and optionally, choose to verify that Bright can send email to the administrator on first boot. 

Select your time zone, and optionally, provide a list of NTP servers to use. If your organization uses a specific set of NTP servers please enter them here. Otherwise, leave the default NTP servers configured.

Enter the DNS servers (Nameservers) one at a time. At least one DNS server must be entered, unless the DNS server(s) are to be defined via DHCP, in which case leave the field blank. Additionally, a list of DNS search domains can be defined. Leave this field blank if the list of search domains is to be provided via DHCP.

Next, select the type of Linux environment modules you want to use. Bright provides both Lmod (Lua modules) and TCL modules.

Scrolling the screen down, select the type of console required on your head node and compute nodes. For each, you can choose either a graphical (desktop) or text console. Typically, users opt for a graphical console on the head node, and a text console for the compute nodes.

The “HPC workload manager” screen requires the selection of a preferred HPC scheduler. Select “None” if an HPC scheduler is not required, or if it is intended to configure one later.

Note: The cm-wlm-setup command, or the corresponding wizard within Bright View, can be used to deploy workload managers (WLMs) after the cluster is installed. Multiple WLM instances, including different types of WLMs, can be deployed on the same cluster and coexist if required.
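
As a sketch of what that looks like after installation: running cm-wlm-setup without arguments from the head node CLI should start its interactive wizard, though the exact behavior and prompts depend on the Bright Cluster Manager version.

# cm-wlm-setup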

The “Network Topology” screen requires the selection of the network topology which represents your configuration.

The type 1 setup is the most common and is the default. In a type 1 setup, all the nodes (including the head nodes) are connected through a private internal network. Nodes may only communicate with entities outside of the cluster via the head node.

In the type 2 setup all nodes are connected on a private layer-2 network segment (e.g. a VLAN). Routable IP addresses are assigned to the nodes, which means that the compute nodes are directly reachable from outside of the cluster. Bright will run a DHCP server on the private layer-2 network segment, so it is recommended not to run any other DHCP servers on this network.

Note:  If there is already a DHCP server configured on the public network, the Bright DHCP server can be ‘locked down’, which means that it will only respond to DHCP requests from Bright nodes. However, the existing DHCP server will also need to be ‘locked down’, otherwise it might respond to DHCP requests from compute nodes intended to be used in the Bright cluster.

In the type 3 setup the head node(s) and compute nodes are connected through separated network segments. All communication between head node(s) and compute nodes is through a router. Compute nodes and head node(s) also communicate with entities outside of the cluster through the router. Compute nodes are directly reachable from outside of the cluster. It is necessary to configure a “DHCP helper” within the router to allow DHCP traffic to be passed between the network which the compute nodes are connected to, and the network the head node(s) is connected to.

The “head node settings” screen requires a hostname for the head node, and a cluster administrator password. This password will be the initial root password for the cluster. Select the “hardware manufacturer” from the dropdown list. If the hardware manufacturer is not listed, or if installing Bright onto a virtual machine, choose “other”.

The “Compute node settings” screen requires the number of racks and the number of compute nodes you wish the installer to initially define and create. Define how the compute nodes are named by specifying the node start number, base name, and the number of digits within the compute node host names. For example, with three compute nodes, a start number of 1, a base name of “node”, and two digits, the installer will create three nodes: node01, node02, and node03.

The “BMC Configuration” screen defines how the Baseboard Management Controllers (BMCs) are configured. If neither the head node(s) nor the compute nodes have or are going to use BMCs, leave this setting as ‘No’ and continue to the next screen. Otherwise, select “Yes” under “Head Node” and “Compute Nodes”, as is appropriate. This will expand the pane(s) to reveal additional questions for the BMC configuration.

For the BMC network type within the expanded pane, choose the type of BMC being deployed. For example, if Dell hardware is being used, choose iDRAC. If unsure, or if the cluster has hardware from multiple vendors, choose IPMI.

Choose to obtain the IP address for the BMCs via DHCP or to assign a static IP address. If you choose “No” (the default), a static IP address will be assigned by Bright.

Check “Yes” to “Automatically configure the BMC when node boots”, unless the BMCs have already been configured and are in use.

Bright needs to know which physical network the BMCs are connected to. BMCs can, and often do, share one of the existing cluster networks (typically, the internal network); however, the BMCs can also be connected to a dedicated management network.

If the BMC ports on the head node or compute nodes are shared physical ports, select the network which the BMC ports are connected to from the dropdown, and check “No” to “Create a new layer-3 network for BMC interfaces”.

If the BMCs are not shared ports (Dedicated physical BMC ports), and are connected to a dedicated BMC management network, select “New dedicated network” from the dropdown, check “Yes” to “Create a new layer-3 network for the BMC interfaces”, and check “Yes” to “Will the head node use a dedicated interface to reach the BMC network”.

The “Networks” screen has one tab for each layer-3 network which will be created. In this example we have three tabs: one for the external network, one for the internal network, and one for the dedicated BMC management network.

For the external network, DHCP is checked by default. Leave it checked if the IP address for the head node’s external interface is to be obtained via DHCP. However, it is common to assign a static IP address to the head node’s external interface, as in this example. If a static IP address is to be assigned, a valid and available IP address, the subnet mask, and the gateway will need to be obtained from the network administrator of the external network.

By default, Bright uses 10.141.0.0/16 on the internal network (“internalnet”), however any valid private network addresses can be used. Bright also assigns a default domain name of “eth.cluster” and reserves a range of addresses for DHCP. Bright does not assign a default gateway. The internal network uses the head node as its default gateway.

Note: When a compute node PXE boots, it broadcasts its MAC address. If the MAC address is not within the Bright database (held within the head node), which will be the case on the first boot of the node, the node will be issued an initial IP address via the Bright internal DHCP server. Once the node has been identified, the node automatically reboots with its Bright-assigned static IP address.

By default, Bright uses 10.148.0.0/16 for the dedicated BMC network (“idracnet”), however any valid private network addresses can be used. The “idrac” prefix is seen within this example because we selected “Dell EMC” as our hardware manufacturer previously within the head node settings. As with the internal network (“internalnet”), by default the dedicated BMC network uses the head node as its default gateway, and its domain name is configured as “idrac.cluster”.

Note: The network and default domain name for the BMC network will change depending upon the hardware manufacturer defined within the head node settings screen.

The “Head node network interfaces” screen provides the opportunity to review and optionally change the IP address that will be assigned to the interfaces on the head node. A row is displayed for each network interface, with entries for interface name, network, and IP address.

The “Compute nodes network interfaces” screen provides the opportunity to review and optionally change the IP addresses that will be assigned to the interfaces on the compute nodes. A row is displayed for each network interface, with entries for interface name, network, and IP offset. The IP offset is 0.0.0.0 by default. As the offset is 0.0.0.0, the first compute node will be allocated the IP address 10.141.0.1, which is the typical configuration.

The “Disk layout” screen requires the selection of the drive within the head node which Bright is to be installed on. This example shows only a single drive; however, multiple drives may be shown depending upon the hardware available.

The layout of the disk partitioning for the head node and the compute nodes is chosen within the two dropdown lists. The “Default Standard Layout” scheme will be selected by default for both; however, if the size of the installation drive being used on the head node is less than 500 GB, Bright automatically changes the selection to the “one big partition” scheme to prevent running out of space within the root partition.

There is normally no reason to change the disk setup on the compute nodes from the default “Default Standard Layout”; however, some flexibility can be gained by using the “one big partition” scheme.

The “Summary” screen shows a summary of the information which has been provided within the wizard. Please read this carefully to confirm the information is correct and as expected. Use the “Back” button to go back and correct anything which is incorrect.

Take a note of the head node IP addresses.

Note: Selecting the topics on the left of the screen can be used to jump back to a given section. However, the “Next” button must be used to go forward, or any changes will not be committed.

When the summary shows the correct information, press “Start” to proceed with the installation.

The “Installation progress” screen shows the status of the installation. Bright recommends checking the “Automatically reboot after installation is complete” checkbox so that no manual intervention is required to complete the installation. The installation should complete in approximately half an hour.

Activate your product key

When signing up for the Bright Easy8 program, you received a product key that entitles you to use Bright Cluster Manager (BCM) to build, deploy, manage and monitor a Linux cluster of up to 8 nodes for one year. The process of using the BCM CLI to obtain a license is called “activating a product key”. The complete documentation for activating a product key can be found within chapter 4 of the Bright Cluster Manager 9.0 Installation Manual. However, if your head node has access to the Internet the procedure is very simple.

Log in as the root user on the CLI of the head node, either locally with a KVM or via SSH using its IP address or assigned name. Execute the ‘request-license’ command, enter your Easy8 product key, and follow the onscreen prompts.
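
For example, assuming the head node is reachable via SSH at the external IP address noted from the summary screen:

# ssh root@<head node external IP>

# request-license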

Note: Product keys can only be used once; once a license is generated the product key is locked.

Note: To prevent mistakes, Bright recommends that the product key is copied from the email and pasted into the shell.

Update your cluster

Once the head node installation has completed, the next step is to update the head node and the default software image. Updating ensures that the latest stable versions of all OS and Bright Cluster Manager packages are installed and running. This makes the cluster more secure and reliable.

The following commands should be executed on the head node CLI based upon the OS deployed.

RHEL variants

Head node:

# yum update

Default software image:

# yum --installroot=/cm/images/default-image update

SLES

Head node:

# zypper refresh

# zypper up

Default software image:

# zypper --root /cm/images/default-image refresh

# zypper --root /cm/images/default-image up

Ubuntu

Head node:

# apt update

# apt upgrade

Default software image:

# cm-chroot-sw-img /cm/images/default-image

# apt update

# apt upgrade

# exit


Note: The update must be carried out on both the head node and the default software image for the deployed OS.

 

Provision the compute nodes

The compute nodes are provisioned with the default software image which was created during head node installation. The OS contained within the default software image on the head node is the same as the OS which was installed onto the head node. For example, if the ISO was created with CentOS 7u7, then CentOS 7u7 is installed on your head node, and the default software image is also CentOS 7u7.

Provisioning of the compute nodes requires that the BIOS of the compute nodes be configured to PXE boot only.

Note: Before a booting node can be provisioned, Bright needs to know what its hostname is. Once Bright knows a node’s hostname, it knows which node category it belongs to, and therefore, which software image should be provisioned onto it. Node identification is the process of linking a node’s MAC address to its hostname. Bright provides several methods to identify nodes, such as switch port-based node identification, which is typically used on large clusters. However, for small clusters, using the node identification wizard within Bright View is the simplest method of node identification.

Power on the compute nodes

Nodes are typically powered on from Bright View or CMSH using IPMI/BMC. By default, the node installer configures each node’s BMC when it is provisioned. However, since the compute nodes have never been provisioned, the BMCs have not yet been configured, so power control from Bright View and CMSH is not available yet. Therefore, for the first boot only, it is necessary to power on and identify each of the compute nodes one at a time. Bright recommends that you power on the first node (i.e., the node you want to identify as node001), wait about a minute, then power on the second node, repeating this process for each of the nodes. The induced time delay will make it easier to identify the nodes during the next step.
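
Once the BMCs have been configured, power can be controlled centrally from the head node. As an illustration only (the exact CMSH syntax may vary between Bright versions), the power state of the nodes could be checked with:

# cmsh -c "device; power status"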

Identify the compute nodes

Open a web browser which supports HTML5, and log into Bright View using the following URL:

https://<head node external IP>:8081/bright-view

Select Devices -> Node identification wizard

Select “MAC” for the “Apply to all visible” row within the Action column. Confirm each node by checking the left-hand side node checkboxes. To commit the changes, press the “Assign” button. Bright then configures identification based upon the MAC addresses of the compute nodes. The result is that each compute node’s hostname is now mapped to its MAC address within the head node’s database.

The next time an identified node boots, Bright will automatically look up its hostname using its MAC address. Since Bright knows the hostname, Bright also knows which software image to provision.
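
An individual mapping can also be inspected from the head node CLI. This is a sketch (node001 is an example hostname; syntax may differ slightly between Bright versions):

# cmsh -c "device use node001; get mac"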

The console of a booting compute node shows that it is now being provisioned. Because this is the first boot, the provisioning mode is FULL, meaning that the disks will be partitioned, file systems will be created, and the entire default software image will be provisioned onto the hard drive.

After approximately five minutes the compute nodes will be provisioned and will have an “UP” status. The cluster is now ready for use.
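
The node states can also be confirmed from the head node CLI, for example with a CMSH one-liner such as the following sketch:

# cmsh -c "device; list"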

This completes the installation and initial configuration of Bright Cluster Manager. What you do next depends on how you intend to use the cluster.

Bright will be adding more “Getting Started Guides” during the coming weeks, so please stay tuned. If you have any issues or questions with Easy8, please refer to the Beacon User Community. If you would like to upgrade your Bright Cluster license to receive full Bright Support or to extend the scope of your cluster including node count or functionality, please contact us.