• Advanced options in sync is a useful feature provided by SWIFT.
  • You can set all these options once at the cluster level.
  • These settings will then be used by all syncs for that specific cluster.


Advanced Options in Sync

  • For the advanced options, go to ‘All Replication’ under the ‘Sync Administration’ tab, click ‘New’, and select ‘Application Replication > Passthrough Replication’.
  • Then go to the ‘Advance Options’ tab. Please find the snippet below for your reference.
    1. This is the Pre/Post src/dst script option for K8S/OpenShift.




  1. If you look at the above snippet, you will see ‘Post sync validation.’ If you select the option ‘No Post sync validation,’ it will skip the post-sync application health validation in sync. The post-sync validation typically includes waiting for all applications to become fully ready. Additionally, you can configure the ‘post sync validation retries’ and ‘retry wait’ when you do not select ‘No Post Sync validations.’
  2. If you look at the Pre/Post src sync script option, you can directly browse your pre- or post-sync scripts. In the above snippet, we have attached both pre and post sync scripts on the source side. When the sync runs, the ‘pre-src script’ will run first, and then the actual sync will start. The ‘post-src script’ will run after all objects are synced to the target. If either script fails, the sync will also fail. (A minimal example of such scripts is sketched after the CLI command below.)
  3. You can do the same on the target side by applying pre/post-dst scripts.
  4. You can also perform these actions through the CLI on the SWIFT server. Please check the CLI command below for your reference.
sc openshift sync --source OKD-GCP --src-project pk --stp-ip-type loadbalancer --pre-script-src /root/pre-script.sh --pre-script-params-src K8S OPENSHIFT --post-script-src /root/post-script.sh --post-script-params-src POD SERVICES --target-openshift OKD-AWS --dst-project pk-1 --dst-storageclass gp3-csi --dtp-ip-type loadbalancer --all-objects --verbose
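  For illustration, a minimal sketch of what the pre/post source scripts referenced above could look like. The paths match the command above, but the contents are hypothetical and entirely up to you; the values given with --pre-script-params-src / --post-script-params-src are expected to be handed to the scripts.

#!/bin/bash
# /root/pre-script.sh -- hypothetical pre-src script; runs before the actual sync starts.
echo "pre-src script starting, params: $*"        # here: K8S OPENSHIFT
# Example pre-check (assumes oc is already logged in to the source cluster):
oc get pods -n pk --no-headers | wc -l

#!/bin/bash
# /root/post-script.sh -- hypothetical post-src script; runs after all objects are synced.
echo "post-src script finished, params: $*"       # here: POD SERVICES

  If either script fails (typically a non-zero exit status), the sync fails, as noted above.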


        

  5. For the above CLI command, you need to give execute permission to the scripts (see the example below). When the sync runs, the pre-script and post-script run as shown in the output further below.
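  For example, to make the scripts from the command above executable:

chmod +x /root/pre-script.sh /root/post-script.sh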



--- Pre src script sync CLI output




--- Post src script sync CLI output




  6. Now, if you want to deploy services or any other configuration from a YAML file in a K8S/OpenShift cluster, you can perform a sync with pre/post src/dst YAML in a PT (Passthrough) sync. Please find the snippet below for your reference; an example YAML is also sketched after the CLI command further down. This YAML option is on the same page as the Pre/Post script option.


  7. You can do the same on the target side as well.

  8. You can also perform these actions through the CLI on the SWIFT server. Please check the CLI command for YAML below for your reference.



sc openshift sync --source OKD-GCP --src-project pk --stp-ip-type loadbalancer --pre-yaml-src /root/pod-pre-src.yaml --post-yaml-src /root/deployment-post-src.yaml --target-openshift OKD-AWS --dst-project pk-1 --dst-storageclass gp3-csi --dtp-ip-type loadbalancer --jobname CHECK-PRE-POST-YAML-PT --all-objects --verbose
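  For illustration, /root/pod-pre-src.yaml could be a small manifest like the hypothetical one below (presumably any valid Kubernetes YAML works; here it is written out with a shell heredoc):

cat > /root/pod-pre-src.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pre-sync-marker
  namespace: pk
spec:
  containers:
  - name: busybox
    image: busybox:1.36
    command: ["sh", "-c", "echo pre-sync marker; sleep 3600"]
EOF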


    --- Pre src YAML sync CLI output



    --- Post src YAML sync CLI output:



  9. We can use pre/post script/YAML in this way to customize the sync.

  10. Also note that you cannot provide input during the sync; you need to provide this information before starting the sync.


    TRAI Config
    • If you want to restrict the transient pod resources (CPU/memory requests and limits), you can use this option.
    • Bandwidth Throttling Config: With this option you can throttle the bandwidth used by the sync. For example, if you have 100 Mbps in your environment and you don't want to use all of it just for transferring data, you can throttle the bandwidth. Please refer to the snippet and command below.
    • You can also do this with the CLI, as below.
sc k8s sync --source GKE --src-namespace blogger-app --stp-ip-type loadbalancer --target-k8s EKS --dst-namespace blogger-app-1 --dst-storageclass standard-rwo --dst-gcp-zone us-central1-b --dtp-ip-type loadbalancer --jobname TRAI-CONFIG-SYNC --stp-cpu-limit 0.7 --stp-cpu-request 0.5 --stp-memory-limit 54 --stp-memory-request 52 --dtp-cpu-limit 1 --dtp-cpu-request 0.7 --dtp-memory-limit 68 --dtp-memory-request 55 --bw-throttle 0.01 --all-objects --verbose


  • You can check the checkpoints or the sync progress using the command below; this will give you a better idea of how the input for the TRAI config was applied.

    sc j sh --job-id <job-id-number> --show-checkpoints
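  • While a sync with these settings is running, one rough way to see the resource settings on the pods in the target namespace is a plain kubectl query (the transient pods are created by SWIFT, so their names will vary; assumes kubectl access to the target cluster):

kubectl -n blogger-app-1 get pods -o custom-columns=NAME:.metadata.name,CPU_REQ:.spec.containers[0].resources.requests.cpu,CPU_LIM:.spec.containers[0].resources.limits.cpu,MEM_REQ:.spec.containers[0].resources.requests.memory,MEM_LIM:.spec.containers[0].resources.limits.memory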




    Image Registry Config

    • Suppose we have migrated the application to the target side and our source container registry is not available there. In that case, you can change the configuration: go through the options and update the container registry in the YAML so that the images reference the specified image registry configuration.
    • There is an Image registry mapping option: Here, you can replace your source cluster's image registry with the new registry config.
    • There is an Image Pull Secret Mapping option: In this, you can replace your source cluster's image pull secret with the new image pull secret configuration.
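    • After a sync that uses these mappings, you can confirm that the replicated workloads on the target now reference the new registry by listing their container images. A minimal check, assuming kubectl access to the target cluster (the namespace pk-1 here is just an example):

kubectl -n pk-1 get deployments -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[*].image}{"\n"}{end}'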




    Kubernetes Service Config
    • Service Type Mapping: This option converts the specified service to the specified new service type. For example, if the WordPress application on the source side is using a LoadBalancer and you don't want the same service type on the target side, you can specify the service type in the target section as mentioned in the snippet above. SWIFT will convert it automatically on the target side.
    • Service NodePort Mapping: This option converts the current NodePort of the service to a new input value for the target cluster. You need to specify ‘Randomize’ for the new <dst-new-nodeport> so that the NodePort is chosen dynamically within the target cluster's service port range. Please refer to the snippet and CLI command below.

  • In the above snippet, we have selected NodePort on the target side for the WordPress service, whereas the original service at the source used a LoadBalancer.
  • Similarly, in the NodePort mapping section, we selected the source NodePort and chose to randomize the NodePort on the target side.
  • The CLI command is below.
sc k8s sync --source GKE --src-namespace blogger-app --stp-ip-type loadbalancer --target-k8s EKS --dst-namespace blogger-app-1 --dst-storageclass standard-rwo --dst-gcp-zone us-central1-c --dtp-ip-type loadbalancer --jobname SERVICE-CONFIG --dst-service-type-map my-wordpress:NodePort --dst-service-port-map my-wordpress:30194:_ --all-objects --verbose
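  • After the sync, a simple way to verify the converted service type and the assigned NodePort on the target (assuming kubectl access; names taken from the command above):

kubectl -n blogger-app-1 get svc my-wordpress -o wide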



Volume Sync Config
  • Here we can include or exclude any volume; the include and exclude options are mutually exclusive. For example, if we have 10 volumes and we only want to migrate five of them, or if we want to exclude just one volume, this can be done. Once we select the volumes, the related services will not be migrated.
  • As you can see in the snippet, this can be done using both the GUI and the CLI.



Commands:

sc k8s sync --source GKE --src-namespace blogger-app --stp-ip-type loadbalancer --target-k8s EKS --dst-namespace blogger-app-1 --dst-storageclass standard-rwo --dst-gcp-zone us-central1-b --dtp-ip-type loadbalancer --jobname VOLUME-SYNC --include-volumes pvc/data-my-wordpress-mariadb-0,pvc/my-wordpress --all-objects --verbose


sc k8s sync --source GKE --src-namespace blogger-app --stp-ip-type loadbalancer --target-k8s EKS --dst-namespace blogger-app-2 --dst-storageclass standard-rwo --dst-gcp-zone us-central1-b --dtp-ip-type loadbalancer --jobname EXCLUDE-VOLUME --exclude-volumes pvc/data-my-wordpress-mariadb-0 --all-objects --verbose 
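  • To find the pvc/<name> values to pass to --include-volumes or --exclude-volumes, you can first list the PVCs in the source namespace (assuming kubectl access to the source cluster):

kubectl -n blogger-app get pvc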





Kubernetes Ingress Config
  • Ingress Class Mapping: This option transfers the ingress objects so that they are mapped to the specified IngressClass on the target. For example, if we have HAProxy/nginx on the source side and are using a different IngressClass on the target side, this option allows you to map the ingress class accordingly, which can be beneficial.
  • Ingress Annotation Mapping: This option adds annotations to the ingress object. Ingress annotations in Kubernetes are used to customize and configure the behavior of Ingress resources. An Ingress resource manages external access to services within a Kubernetes cluster, typically through HTTP and HTTPS. If no class mapping is done, SWIFT will remove the ingress-class-related annotations and the class name from each replicated ingress. You can configure it as shown in the snippet below.


  • You can configure the K8s ingress mapping from the command line as well.
sc k8s sync --source Local-k8s --src-namespace ingress --stp-ip-type nodeport --target-k8s Local-k8s --dst-namespace ingress --dst-storageclass longhorn --dtp-ip-type nodeport --jobname INGRESS-SYNC --dst-ingressclass-map example-ingress:nginx --dst-ingress-annotation-map example-ingress:kubernetes.io/ingress.class:gce --all-objects --verbose
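  • After the sync, you can check the replicated ingress on the target to confirm the class and annotation mapping (assuming kubectl access; names taken from the command above):

kubectl -n ingress get ingress example-ingress -o jsonpath='{.spec.ingressClassName}{"\n"}{.metadata.annotations}{"\n"}'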




POD Replicas Scale Config
  • With this option, scaling up or down is done during the sync. Pod scaling in Kubernetes refers to the ability to automatically adjust the number of running pod instances based on the current demand or specified criteria.
  • In the below snippet, you just need to choose the ‘Object Type’ from the first block, which will display the selected ‘object instances’ in the second block. In the last block, you simply pass the desired numbers for scaling up or down and click on the ‘Plus’ sign. It will display as <Object_type:Object_instance:number>.
  • Then you can start your sync, and after the sync, you will see the number of pods.
  • Note: You cannot provide input during the sync; you need to provide this information before starting the sync.

  • When you click the ‘Plus’ sign button, it looks as shown below.





  • You can also use the command on the SWIFT server. Here we have scaled up the deployment of the nginx application.


sc k8s sync --source GKE --src-namespace nginx --stp-ip-type loadbalancer --target-k8s EKS --dst-namespace nginx-1 --dst-storageclass standard-rwo --dst-gcp-zone us-central1-b --dtp-ip-type loadbalancer --jobname SCALE-UP --dst-scale-pod-replicas DEPLOYMENT:nginx-1724442127:2 --all-objects --verbose
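  • After the sync, you can confirm the new replica count on the target (assuming kubectl access; names taken from the command above):

kubectl -n nginx-1 get deployment nginx-1724442127 -o jsonpath='{.spec.replicas}{" desired / "}{.status.readyReplicas}{" ready"}{"\n"}'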




  1. Before scaling up:


  2. After scaling up: