Issue/Feature Description:
Testcase Precondition:
- Kahu project installed in the given namespace (test-kahu)
- All the below pods should be up and running:
  a. backup service
  b. meta service with nfs provider
  c. nfs-server (metadata location already created)
- Namespace test-ns is created and contains some Kubernetes resources
- Namespace restore-ns is created
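A quick way to confirm these preconditions (the actual pod names depend on the Kahu installation):

  kubectl get pods -n test-kahu      # backup service, meta service, and nfs-server pods should be Running
  kubectl get ns test-ns restore-ns  # both namespaces should exist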
Testcase steps:
1. Create the below Backup CR using the kubectl command
   (use kubectl create -f <backup.yaml>)

   apiVersion: kahu.io/v1
   kind: Backup
   metadata:
     name: backup-kahu-0001
   spec:
     includeNamespaces: [test-ns]
     metadataLocation: nfs

2. Use kubectl describe backup backup-kahu-0001 -n test-kahu and check the backup status
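A minimal sketch for the step-2 check; the .status.stage and .status.state field paths are assumptions based on the expected Stage/State values listed below:

  kubectl get backup backup-kahu-0001 -n test-kahu -o jsonpath='{.status.stage} {.status.state}{"\n"}'
  # expected output (assuming these field names): Finished Completed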
3. Get inside the nfs-server pod and check the contents of the given mount path
   (use the command kubectl exec -ti <nfs-server-pod> -n test-kahu -- /bin/sh); see the sketch below
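A minimal sketch of the step-3 check, assuming <mount-path> is the configured metadata location and the archive is named after the backup (per the expected result below):

  ls -l <mount-path>/
  tar -tvf <mount-path>/backup-kahu-0001.tar   # list the archive contents without extracting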
4. Create the below Restore CR on the new namespace (restore-ns)

   apiVersion: kahu.io/v1
   kind: Restore
   metadata:
     name: restore-kahu-0001
   spec:
     backupName: backup-kahu-0001
     namespaceMapping:
       test-ns: restore-ns
     excludeResources:
       - name: kahu
         kind: Deployment
         isRegex: true
Expected Result:
- In step 2, verify that
  a) the backup Stage is Finished and the State is Completed
  b) the resource list in the status shows all the required resources
- In step 3, verify that
  a) a tar file is created with the name of the backup
  b) after untarring the file, the backed-up resource contents are present
- In step 4, verify that all resources are restored, and that Deployments are restored only if "kahu" does not appear in their name; see the verification sketch after this list.
  For example, Deployments named kahu-123, 123-kahu, 123-kahu-456, etc. are not restored.
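A minimal sketch for the step-4 verification (which resource kinds to compare depends on what test-ns contained):

  kubectl get all -n restore-ns                       # the restored resources should appear here
  kubectl get deployments -n restore-ns | grep kahu   # expected: no output, since such Deployments are excluded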
Why this issue is to be fixed / feature is needed (give scenarios or use cases):
This is a testcase that is to be automated, to make sure a deployment is properly backed up and restored.
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any
useful information for this issue)