Restore to a new/existing Cluster

EcoSys Connect Installation and Configuration (Microk8s Kubernetes)

Restore to a new VM

Whenever a cluster or VM fails, the following steps can be used to re-create the cluster and restore the environment to the newly created cluster.

Prerequisites

  1. A new VM set up with the minio storage directory restored from the backup (see the example after this list).

  2. The installer/microk8s directory on the machine is configured properly for the server.
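
As a sketch, and assuming the minio storage directory was backed up to a location such as /home/user/minio-store-backup while the original storage path was /home/user/minio-store (both paths are examples only and will differ per environment), the directory can be restored onto the new VM with commands along these lines:

  # Copy the backed-up minio storage directory to the path that will later be
  # passed to install-minio.sh (paths shown are illustrative)
  rsync -a /home/user/minio-store-backup/ /home/user/minio-store/

  # Confirm the restored contents are present
  ls -l /home/user/minio-store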

Use the following steps to restore the environment:

  1. Go to the installer/microk8s directory and run ‘./create-cluster.sh’ to create the new cluster.

  2. When the create-cluster.sh script completes, run ‘cd /home/ecosys/microk8s; ./resume-create-cluster.sh’ to complete the install.

  3. Install minio into the cluster by running ‘./install-minio.sh <minio-password> <minio-storage-path>’. For example: ‘./install-minio.sh mpass1234 /home/user/minio-store’

    The minio-password must be 8 characters or longer. The password should match the one used during the original installation.

  4. Run ‘./restore-velero.sh’ to install velero for the restore.

  5. Ensure that the Velero pod has started by running ‘microk8s kubectl get pods -n velero’

  6. To see the available backups, run ‘./velero backup get’

  7. To run the restore, execute ‘./velero restore create --from-backup <backup name>’

  8. The restore will run for several minutes. To monitor the process, run ‘./velero restore describe <restore name>’ (see the example session after this procedure).

    The restore is done when the Phase: status shows Completed.

  9. After the restore is completed, check that all pods are in a running state.

    1. Run ‘microk8s kubectl get pods -n <CONNECT_NAMESPACE>’

    2. Run ‘microk8s kubectl get pods -n <AGENT_NAMESPACE>’

      1. If the IP address of the Ingress Gateway has changed, the Agent Pod will not start successfully, since it cannot connect to the config server.

      2. To edit the Ingress IP address in the agent deployment, run ‘microk8s kubectl -n <AGENT_NAMESPACE> edit deployment agent’

      3. Find the old Ingress IP address in the file and change it to the new one.

      4. Save the file. The Agent will be re-deployed and will start successfully.

  10. Install the port forwarder by running ‘./install-port-forwarder.sh’

  11. If the hostname for the new environment is different, the hostname needs to be updated in the gateway and virtualservice after the restore. To do this:

    1. Run ‘microk8s kubectl -n <CONNECT_NAMESPACE> edit virtualservice’. Update any host name with the new one and save the file.

    2. Run ‘microk8s kubectl -n <CONNECT_NAMESPACE> edit gateway’. Update any host name with the new one and save the file.

  12. The restore process puts Velero into a Read-Only mode. To allow future backups, run the script ‘./reset-velero-after-restore.sh’ to put Velero back into Read-Write mode.

  13. Access the Connect UI to verify that Connect is fully restored.
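
The following is an illustrative session for steps 6 through 9; the backup name, restore name, and generated output will differ in each environment and are shown here only as a sketch:

  # Step 6: list the available backups
  ./velero backup get

  # Step 7: create a restore from one of the listed backups
  # ('connect-backup-20240101' is a made-up backup name)
  ./velero restore create --from-backup connect-backup-20240101

  # Velero derives a restore name from the backup name and a timestamp.
  # Step 8: monitor the restore until its Phase shows Completed
  ./velero restore describe connect-backup-20240101-20240102103000

  # Step 9: verify that the Connect and Agent pods are running
  microk8s kubectl get pods -n <CONNECT_NAMESPACE>
  microk8s kubectl get pods -n <AGENT_NAMESPACE>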

Restore to an existing Cluster

Whenever a namespace or container is damaged but the cluster and the resource group still exist, a restore can be done to an existing cluster.

Using the same installer/microk8s directory from the original installation, perform the following steps; a condensed command sequence is shown after the list.

  1. Run ‘./uninstall-agent.sh’ to uninstall Connect Agent.

  2. Run ‘./uninstall-connect.sh’ to uninstall Connect.

  3. Uninstall Velero by running ‘./uninstall-velero.sh’

  4. Uninstall the port forwarder by running ‘./uninstall-port-forwarder.sh’

  5. Run ‘./restore-velero.sh’ to set up Velero for the restore.

  6. To see the available backups, run ‘./velero backup get’

  7. To run the restore, execute ‘./velero restore create --from-backup <backup name>’

  8. The restore will run for several minutes. To monitor the process, run ‘./velero restore describe <restore name>’.

    The restore is done when the Phase: status shows Completed.

    1. After the restore is completed, check that all pods are in a running state.

      1. Run ‘microk8s kubectl get pods -n <CONNECT_NAMESPACE>’

      2. Run ‘microk8s kubectl get pods -n <AGENT_NAMESPACE>’

    2. Install the port forwarder by running ‘./install-port-forwarder.sh’

    3. The restore process puts Velero into a Read-Only mode. To allow future backups, run the script ‘./reset-velero-after-restore.sh’ to put Velero back into Read-Write mode.

    4. Access the Connect UI to verify that Connect is fully restored.
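
In condensed form, and assuming the original installer/microk8s directory is still intact, the existing-cluster restore amounts to a command sequence along the following lines (the backup name is illustrative):

  # Steps 1-4: remove the damaged Connect components and Velero
  ./uninstall-agent.sh
  ./uninstall-connect.sh
  ./uninstall-velero.sh
  ./uninstall-port-forwarder.sh

  # Steps 5-7: reinstall Velero for the restore and restore from an existing backup
  ./restore-velero.sh
  ./velero backup get
  ./velero restore create --from-backup connect-backup-20240101

  # Step 8: after the restore completes, verify the pods, reinstall the
  # port forwarder, and return Velero to Read-Write mode
  microk8s kubectl get pods -n <CONNECT_NAMESPACE>
  microk8s kubectl get pods -n <AGENT_NAMESPACE>
  ./install-port-forwarder.sh
  ./reset-velero-after-restore.sh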