Deploying MetalSoft using OVAs

In addition to the Kubernetes-based deployment, we also provide ready-to-go virtual appliance images for both the Controller and the Agent.

The virtual appliance images are provided as OVF packages (an .ovf descriptor plus a VMDK disk image). The download URL, username, and password will be provided by the MetalSoft team.

An ESXi host (or other hypervisor) is required to run both the controller and the agent VMs. The host should have a minimum of 24 GB of RAM, 10 cores, and 100 GB of disk space.

If the VMs are deployed separately, the following are the minimum resources that should be available to them:

  • controller: 8 CPUs, 16 GB RAM

  • agent: 1 CPU, 4 GB RAM

Download the images

You should have received the URL, username, and password from the MetalSoft team. Use the following commands to download all files, or download them with your browser.

USER=
PASS=
URL=
mkdir -p metalsoft
cd metalsoft
wget -c -r --no-parent -nH --cut-dirs=1 --user "$USER" --password "$PASS" "$URL"

% ls -lha
total 19629760
drwxr-xr-x  6 alex  staff   192B Mar 14 12:52 .
drwx------@ 9 alex  staff   288B Mar 14 12:52 ..
-rw-r--r--  1 alex  staff   3.3G Mar  9 18:34 agent-disk001.vmdk
-rw-r--r--  1 alex  staff   6.4K Mar  9 18:34 agent.ovf
-rw-r--r--  1 alex  staff   6.1G Mar 10 12:35 controller-disk001.vmdk
-rw-r--r--  1 alex  staff   7.1K Mar 10 12:35 controller.ovf

Note that the size of the images might vary.
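After the transfer finishes, a quick sanity check is to confirm that every expected file arrived and is non-empty. The sketch below assumes the file names from the listing above; adjust them if your bundle differs.

```shell
#!/bin/sh
# check_images: verify that each expected appliance file exists and is
# non-empty in the given download directory. The file names below are
# the ones from the listing above; adjust them if your bundle differs.
check_images() {
  dir=$1
  missing=0
  for f in agent.ovf agent-disk001.vmdk controller.ovf controller-disk001.vmdk; do
    if [ ! -s "$dir/$f" ]; then
      echo "missing or empty: $f"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all images present"
  return "$missing"
}

check_images ./metalsoft || echo "re-download the files listed above"
```

If any file is reported missing or empty, re-run the wget command above; the -c flag resumes partial downloads.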

Installing & configuring the Controller VM

The following tutorial shows how to deploy the virtual appliance images on an ESXi server, but other virtualization solutions can also be used.

1. Deploy the Controller Appliance VM in ESXi

  1. Use the Create VM button:

  2. Create a VM from an OVF for the Controller:

  3. Wait until the image is uploaded and the VM created:

  4. Ensure that enough resources are provided to the controller (8 CPUs, 16 GB RAM):

2. Configure the IP of the controller image

  1. On the ESXi host manager, using the VM console, log in to the operating system:

    • Username: root

    • Password: MetalsoftR0cks@$@$

  2. Edit the /etc/netplan/00-installer-config.yaml file:

    1. Set the CONTROLLER IP address in the addresses field (in my case this is 192.168.1.20/24).

    2. Set the gateway in the routes field. In my case this is 192.168.1.1 (be careful not to include the /24).

    3. Set the nameserver in the nameservers/addresses field. In my case this is 192.168.1.1 (be careful not to include the /24).

    4. Keep the 192.168.212.212 IP on the second interface (or a loopback interface) as is; it is used for internal communication within the controller.

    5. Apply the network configuration with netplan apply --debug
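Putting the steps above together, the edited file might look like the following sketch. The interface names (ens160, ens192) are assumptions for illustration; keep whatever interface names your file already contains and only change the values described above.

```yaml
# /etc/netplan/00-installer-config.yaml (illustrative values; interface
# names are assumptions -- keep the ones already present in your file)
network:
  version: 2
  ethernets:
    ens160:                      # primary interface; name may differ
      addresses:
        - 192.168.1.20/24        # controller IP
      routes:
        - to: default
          via: 192.168.1.1       # gateway, no /24 suffix
      nameservers:
        addresses:
          - 192.168.1.1          # DNS server, no /24 suffix
    ens192:                      # second interface; keep the internal IP
      addresses:
        - 192.168.212.212/32
```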

  3. Verify that the network stack is correctly configured:

    1. ping the gateway

      ping 192.168.1.1
      PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
      64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.674 ms
      
    2. ping the MetalSoft repo

      ping repo.metalsoft.io
      PING repo.metalsoft.io (176.223.248.10) 56(84) bytes of data.
      64 bytes from 176.223.248.10 (176.223.248.10): icmp_seq=1 ttl=62 time=0.621 ms
      
    3. ping the internal IP

      ping 192.168.212.212
      PING 192.168.212.212 (192.168.212.212) 56(84) bytes of data.
      64 bytes from 192.168.212.212: icmp_seq=1 ttl=64 time=0.095 ms
      

    If any of the three tests above fail, check your settings, update and try again.

    Until the network is correctly configured, the Kubernetes pods will be down and running k get pods will return an error. That is expected. A common mistake is not having the correct default gateway configured.

  4. Update the IP configuration of the MetalSoft controller

    Run the following command to set the IP of the appliance and that of the agent: metalsoft-update-k8s-ips <controllerip> <agent_ip> [proxyurl]

    The command also takes an optional proxyurl parameter at the end if you access the internet via a proxy.

    metalsoft-update-k8s-ips 192.168.1.20 192.168.1.10
    

    The above command expects Kubernetes to be already running. If it fails with an error, try the same command again in a few minutes.

  5. Check that all the pods are running (optional):

    Depending on the resources allocated to the Controller VM, Kubernetes will need some time to start all the pods.

    k get pods
    

    Note: The Controller tries to self-heal when it sees pods in a non-Running state for a long time. To disable this behavior, create an empty file: /etc/.ms_no_selfheal
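Because the metalsoft-update-k8s-ips command in step 4 can fail until Kubernetes is up, it can be wrapped in a simple retry loop. The retry helper below is a sketch and not part of the appliance; the attempt count and delay are arbitrary choices.

```shell
#!/bin/sh
# retry: run a command up to $1 times, sleeping $2 seconds between
# attempts; return 0 on the first success, 1 if every attempt fails.
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    echo "attempt $i failed; retrying in ${delay}s" >&2
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Only attempt the update if the helper exists (it ships with the appliance).
if command -v metalsoft-update-k8s-ips >/dev/null 2>&1; then
  retry 5 60 metalsoft-update-k8s-ips 192.168.1.20 192.168.1.10
fi
```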

3. Set up a hosts file entry or a DNS record

To access the controller, add an entry to your machine's hosts file:

  • Linux & MAC: /etc/hosts

  • Windows: C:\Windows\System32\drivers\etc\hosts

Add an entry:

192.168.1.20 demo.metalsoft.io
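On Linux and macOS this can be done from a shell. The helper below is a sketch that only appends the entry if the hostname is not already present; run the commented example with root privileges to modify /etc/hosts.

```shell
#!/bin/sh
# add_hosts_entry: append "IP NAME" to a hosts file only if NAME is not
# already present, so repeated runs do not duplicate the entry.
add_hosts_entry() {
  file=$1 ip=$2 name=$3
  if grep -q "[[:space:]]$name\$" "$file" 2>/dev/null; then
    echo "$name already present in $file"
  else
    printf '%s %s\n' "$ip" "$name" >> "$file"
    echo "added $name to $file"
  fi
}

# Example (requires root to write /etc/hosts):
# add_hosts_entry /etc/hosts 192.168.1.20 demo.metalsoft.io
```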

4. Access the controller

The controller is now available at:

  • URL: https://demo.metalsoft.io

  • username: demo@metalsoft.io

  • password: metalsoftR0cks

Installing & configuring the Agent VM

  1. Use the Create VM button:

  2. Create a VM from an OVF for the Agent:

  3. Wait until the image is uploaded and the VM created:

  4. Ensure that enough resources are provided to the agent (1 CPU, 4 GB RAM):

Configure the IP of the Agent

  1. On the ESXi host, using the agent VM console, log in with the following credentials:

    • Username: root

    • Password: MetalsoftR0cks@$@$

  2. Edit the /etc/netplan/00-installer-config.yaml file:

    1. Set the AGENT IP address in the addresses field (in my case this is 192.168.1.10/24).

    2. Set the gateway in the routes field. In my case this is 192.168.1.1 (be careful not to include the /24).

    3. Set the nameserver in the nameservers/addresses field. In my case this is 192.168.1.1 (be careful not to include the /24).

    4. Apply the network configuration with netplan apply --debug

Connecting the Agent to the controller

The agent is now ready to connect to the controller. Follow these steps to connect it:

  1. On the agent VM, set the CONTROLLER IP to which the agent will connect:

    metalsoft-update-controller-ip 192.168.1.20 # note: the controller IP, not the agent IP
    

  2. Check that the agent is connected to the controller

    1. In the controller, navigate to the Datacenters section and click on the first datacenter:

    2. On the datacenter details page, go to the Datacenter Agents tab and check that the agents are connected:

    If you see multiple agent microservices connected, you are ready to start registering switches and servers.

Next steps

Consult the following for more information:

Restricted internet access

The controller needs access to the MetalSoft repository as well as the official Kubernetes repositories to pull updates. If internet access must be restricted, follow these guides for details:

Restarting the agents

If you need to restart the agents for any reason, log into the agent VM and run:

cd /opt/metalsoft/agents
docker-compose restart agents

To see the logs of the agents run:

tail -f /opt/metalsoft/logs/*