Testcase Precondition:
1. Kahu project installed in the given namespace (test-kahu).
2. All the below pods should be up and running:
   a. backup service
   b. meta service with the nfs provider
   c. nfs-server
3. Metadata location is created already on the nfs-server.
4. Namespace test-ns is created and contains some Kubernetes resources.
5. Namespace restore-ns is created.
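
A quick way to confirm these preconditions before running the steps (a minimal sketch; pod names will vary by deployment):

```sh
# Confirm the Kahu pods are up in the install namespace
kubectl get pods -n test-kahu
# Confirm the source and target namespaces exist
kubectl get namespace test-ns restore-ns
```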
Testcase steps:
1. Create the below Backup CR using the kubectl command
   (use kubectl create -f <backup.yaml>); a creation sketch follows the CR:
   apiVersion: kahu.io/v1
   kind: Backup
   metadata:
     name: backup-kahu-0001
   spec:
     includeNamespaces: [test-ns]
     metadataLocation: nfs
     includeResources:
       - name:
         kind: Pod
         isRegex: true
       - name:
         kind: Deployment
         isRegex: true
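
A minimal sketch of this step, assuming the CR above is saved as backup.yaml:

```sh
kubectl create -f backup.yaml
# Confirm the CR was accepted
kubectl get backup backup-kahu-0001 -n test-kahu
```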
2. Check the backup with kubectl describe backup backup-kahu-0001 -n test-kahu.
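
Since this test case is meant to be automated, the same check can be scripted; the .status.stage and .status.state field paths are assumptions inferred from the expected results below:

```sh
# Expect "Finished Completed" once the backup is done
kubectl get backup backup-kahu-0001 -n test-kahu \
  -o jsonpath='{.status.stage} {.status.state}{"\n"}'
```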
3. Get inside the nfs-server pod and check the content under the given mount path
   (use kubectl exec -ti <nfs-server-pod> -n test-kahu -- /bin/sh); see the sketch below.
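
A sketch of this check, where the pod name nfs-server, the mount path /data, and the .tar file name are all assumptions; substitute the actual values from the metadata location configuration:

```sh
# List the backup artifacts on the NFS export
kubectl exec -ti nfs-server -n test-kahu -- ls -l /data
# Inspect the tar file named after the backup without extracting it
kubectl exec -ti nfs-server -n test-kahu -- tar -tf /data/backup-kahu-0001.tar
```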
4. Create the below Restore CR that maps the backed-up namespace to the new namespace (restore-ns); a creation sketch follows the CR:
   apiVersion: kahu.io/v1
   kind: Restore
   metadata:
     name: restore-kahu-0001
   spec:
     backupName: backup-kahu-0001
     namespaceMapping:
       test-ns: restore-ns
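
A minimal sketch of this step, assuming the CR above is saved as restore.yaml and that the Restore CR exposes the same status fields as the Backup CR (an assumption):

```sh
kubectl create -f restore.yaml
# Expect "Finished Completed" once the restore is done (field paths assumed)
kubectl get restore restore-kahu-0001 -n test-kahu \
  -o jsonpath='{.status.stage} {.status.state}{"\n"}'
```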
Expected Result:
- In step 2:
  a) Verify that the backup Stage is Finished and the State is Completed.
  b) Verify that the resource list in the status shows all the required resources.
- In step 3, verify that:
  a) A tar file is created with the name of the backup.
  b) After untarring the file, all the pods and the specific deployment mentioned are backed up.
- In step 4, verify that:
  a) The backed-up pods and the specific deployment are up in the new namespace restore-ns.
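
A sketch of the final verification in the mapped namespace:

```sh
# The backed-up deployment and pods should now be running in restore-ns
kubectl get deployments,pods -n restore-ns
kubectl wait --for=condition=Ready pod --all -n restore-ns --timeout=120s
```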
Why this issue is to be fixed / feature is needed (give scenarios or use cases):
This is a test case that is to be automated to make sure a deployment is properly backed up and restored.