Issue/Feature Description:
Testcase Precondition:
- Kahu project installed in the given namespace (test-kahu)
- All the below pods should be up and running:
  a. backup service
  b. meta service with nfs provider
  c. nfs-server (the metadata location is already created)
- Namespaces test-ns1 and test-ns2 are created and contain some Kubernetes resources
- Namespace restore-ns is created
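The preconditions above can be checked with a few kubectl commands against the test cluster (namespace names are taken from the list above; adjust them if your deployment differs):

```shell
# Check that the Kahu pods (backup service, meta service, nfs-server) are running
kubectl get pods -n test-kahu

# Check that the source and restore-target namespaces exist
kubectl get ns test-ns1 test-ns2 restore-ns

# Confirm the source namespaces contain some resources to back up
kubectl get all -n test-ns1
kubectl get all -n test-ns2
```

These commands need a live cluster with Kahu installed, so they are a pre-flight sketch rather than a standalone script.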
Testcase steps:
1. Create the below backup CR using kubectl
   (use kubectl create -f <backup.yaml>)
apiVersion: kahu.io/v1
kind: Backup
metadata:
  name: backup-Kahu-0001
spec:
  includeNamespaces: [test-ns1, test-ns2]
  metadataLocation: nfs
  includeResources:
    - name:
      kind: Pod
      isRegex: true
2. Use kubectl describe backup -n test-kahu
3. Get inside the nfs-server pod and check the content inside the given mount path
   (use kubectl exec -ti -n test-kahu <nfs-server-pod> /bin/sh)
4. Create a restore CR on the new namespace (restore-ns)
apiVersion: kahu.io/v1
kind: Restore
metadata:
  name: restore-Kahu-0001
spec:
  backupName: backup-Kahu-0001
  includeNamespaces: [test-ns1]
  namespaceMapping:
    test-ns1: restore-ns
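The steps above can be sketched as one command sequence (the manifest filenames and the `<nfs-server-pod>` name are placeholders; look the pod name up with `kubectl get pods -n test-kahu`):

```shell
# Step 1: create the backup CR
kubectl create -f backup.yaml

# Step 2: inspect the backup CR status
kubectl describe backup backup-Kahu-0001 -n test-kahu

# Step 3: open a shell inside the nfs-server pod and check the metadata location
kubectl exec -ti -n test-kahu <nfs-server-pod> -- /bin/sh
# inside the pod: list the mount path; a tar file named after the backup should exist

# Step 4: create the restore CR
kubectl create -f restore.yaml
```

This is a sketch against a live cluster, not a self-contained script; each command depends on the Kahu CRDs being installed.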
Expected Result:
- In step 2,
  a) verify that the backup stage is Finished and the state is Completed
  b) verify that the resource list in the status shows all the required resources
- In step 3, verify that
  a) a tar file is created with the name of the backup
  b) after untarring the file, all deployments and pods are backed up; note that a pod
     which has an owner reference will not be backed up
- In step 4, verify that
  a) the backed-up pod is up in namespace restore-ns
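The status checks can also be run non-interactively. The `.status.stage` and `.status.state` field paths below are assumptions based on the wording above; confirm the actual schema with `kubectl explain backup.status`:

```shell
# Print the backup stage and state from the CR status (field paths assumed)
kubectl get backup backup-Kahu-0001 -n test-kahu \
  -o jsonpath='{.status.stage}{" "}{.status.state}{"\n"}'

# Verify the restored pod came up in the mapped namespace
kubectl get pods -n restore-ns
```

If the jsonpath prints "Finished Completed" and the pod list in restore-ns shows the backed-up pod, the expected result holds.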
Why this issue is to be fixed / feature is needed (give scenarios or use cases):
How to reproduce, in case of a bug:
Other Notes / Environment Information: (Please give the env information, log link or any useful information for this issue)