CDM Restore
Before you attempt to restore data from backups, make sure that you’ve implemented cloud secret management. DS instances on which you restore data must have the same master key as DS instances on which you perform backups. If you don’t share the master key, you won’t be able to restore from your backups.
This page covers three options to restore data from backups:
- New CDM Using DS Backup
- Restore All DS Directories
- Restore One DS Directory
New CDM Using DS Backup
Creating new instances from previously backed up DS data is useful when a system disaster occurs, or when directory services are lost. It is also useful when you want to port test environment data to a production deployment.
To create new DS instances with data from a previous backup:
- Make sure that your current Kubernetes context references the CDM cluster and the prod namespace. (To check or switch the active context and namespace, see the first example after this procedure.)
- Update the YAML file used by the CDM to create Kubernetes secrets that contain your cloud storage credentials:
On Google Cloud
$ kubectl create secret generic cloud-storage-credentials \
    --from-file=GOOGLE_CREDENTIALS_JSON=/path/to/my-sa-credential.json \
    --dry-run --output yaml > /path/to/forgeops/kustomize/base/7.0/ds/base/cloud-storage-credentials.yaml
In this example, my-sa-credential.json is the JSON file that contains the Google service account key. Specify its actual path and file name.
On AWS
$ kubectl create secret generic cloud-storage-credentials \
    --from-literal=AWS_ACCESS_KEY_ID=my-access-key \
    --from-literal=AWS_SECRET_ACCESS_KEY=my-secret-access-key \
    --dry-run --output yaml > /path/to/forgeops/kustomize/base/7.0/ds/base/cloud-storage-credentials.yaml
On Azure
$ kubectl create secret generic cloud-storage-credentials \
    --from-literal=AZURE_ACCOUNT_NAME=my-storage-account-name \
    --from-literal=AZURE_ACCOUNT_KEY=my-storage-account-access-key \
    --dry-run --output yaml > /path/to/forgeops/kustomize/base/7.0/ds/base/cloud-storage-credentials.yaml
- Configure the backup bucket location and enable the automatic restore capability:
  - Change to the /path/to/forgeops/kustomize/base/kustomizeConfig directory.
  - Open the kustomization.yaml file.
  - Set the DSBACKUP_DIRECTORY parameter to the location of the backup bucket. For example:
    On Google Cloud: "gs://my-backup-bucket"
    On AWS: "s3://my-backup-bucket"
    On Azure: "az://my-backup-bucket"
  - Set the AUTORESTORE_FROM_DSBACKUP parameter to "true".
    (For one possible layout of these entries, see the kustomization.yaml sketch after this procedure.)
- Remove your credentials from the YAML file that the CDM uses to create Kubernetes secrets:
$ kubectl create secret generic cloud-storage-credentials \
    --dry-run --output yaml > /path/to/forgeops/kustomize/base/7.0/ds/base/cloud-storage-credentials.yaml
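As noted in the first step, the commands in this procedure assume the active context and namespace are correct. A minimal way to check and, if needed, switch the namespace (assuming your kubeconfig already contains a context for the CDM cluster):

$ kubectl config current-context                                   # confirm the CDM cluster context is active
$ kubectl config set-context --current --namespace=prod            # point the active context at the prod namespace
$ kubectl config view --minify --output 'jsonpath={..namespace}'   # verify which namespace is now in effect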
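For reference, a hedged sketch of how the backup-related entries in kustomization.yaml might look after step 3. The DSBACKUP_DIRECTORY and AUTORESTORE_FROM_DSBACKUP keys come from this procedure; the generator name and surrounding structure are illustrative and may differ in your forgeops checkout, so edit the existing entries rather than copying this verbatim:

configMapGenerator:
- name: platform-config                            # illustrative name; keep the generator already defined in the file
  literals:
  - DSBACKUP_DIRECTORY="gs://my-backup-bucket"     # use s3://... on AWS or az://... on Azure
  - AUTORESTORE_FROM_DSBACKUP="true"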
When the platform is deployed, new DS pods are created, and the data is automatically restored from the most recent backup available in the cloud storage location you have configured.
To verify that the data has been restored:
- Use the IDM UI or the platform UI.
- Review the logs for the DS pods' initialize container. For example:
  $ kubectl logs --container initialize ds-idrepo-0
Restore All DS Directories
To restore all the DS directories in your CDM deployment from backup:
- Delete all the PVCs attached to DS pods using the kubectl delete pvc command (see the example after this procedure).
- Because PVCs might not be deleted immediately while the pods they're attached to are still running, stop the DS pods.
  Using separate terminal windows, stop every DS pod with the kubectl delete pod command. This deletes the pods and their attached PVCs.
  Kubernetes automatically restarts the DS pods after you delete them. As the pods restart, the CDM's automatic restore feature recreates the PVCs by retrieving backup data from cloud storage and restoring the DS directories from the latest backup.
- After the DS pods have come up, restart the IDM pods to reconnect IDM to the restored PVCs:
  - List all the pods in the prod namespace.
  - Delete all the pods running IDM.
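A hedged sketch of the commands these steps describe, assuming the current namespace is prod and default CDM naming (DS pods such as ds-idrepo-0 and ds-cts-0, with PVCs named data-<pod>); list the actual resources in your deployment before deleting anything:

$ kubectl get pvc                                      # identify the PVCs attached to the DS pods
$ kubectl delete pvc data-ds-idrepo-0 data-ds-cts-0    # example names; the command blocks while the pods still exist
$ kubectl delete pod ds-idrepo-0 ds-cts-0              # run in a separate terminal; lets the PVC deletion complete
$ kubectl get pods                                     # wait until the recreated DS pods are Ready
$ kubectl delete pod idm-6f9c7b8d9-abcde               # example IDM pod name; delete each IDM pod to reconnect IDM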
Restore One DS Directory
In a CDM deployment that has automatic restore enabled, you can recover a failed DS pod if the latest backup is within the replication purge delay:
- Delete the PVC attached to the failed DS pod using the kubectl delete pvc command (see the example after this procedure).
- Because the PVC might not be deleted immediately while the attached pod is still running, stop the failed DS pod.
  In another terminal window, stop the failed DS pod with the kubectl delete pod command. This deletes the pod and its attached PVC.
  Kubernetes automatically restarts the DS pod after you delete it. As the pod restarts, the CDM's automatic restore feature recreates the PVC by retrieving backup data from cloud storage and restoring the DS directory from the latest backup.
- If the DS instance that you restored was the ds-idrepo instance, restart the IDM pods to reconnect IDM to the restored PVC:
  - List all the pods in the prod namespace.
  - Delete all the pods running IDM.
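A hedged sketch for a single failed replica, assuming the failed pod is ds-idrepo-1 and its PVC follows the default data-<pod> naming; substitute the names from your own deployment:

$ kubectl delete pvc data-ds-idrepo-1              # blocks until the attached pod is gone
$ kubectl delete pod ds-idrepo-1                   # run in another terminal; the pod restarts and restores from backup
$ kubectl get pods                                 # confirm ds-idrepo-1 is Ready again
$ kubectl delete pod idm-6f9c7b8d9-abcde           # only if ds-idrepo was restored; example name, delete each IDM pod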
For information about how to manually restore DS when the latest available backup is older than the replication purge delay, see the Restore section in the DS documentation.
Best Practices for Restoring Directories
- Use a backup that is newer than the last replication purge.
- When you restore a DS replica from backups that are older than the purge delay, that replica can no longer participate in replication. Reinitialize the replica to restore the replication topology.
- If the available backups are older than the purge delay, initialize the DS replica from an up-to-date master instance. For more information on how to initialize a replica, see Manual Initialization in the DS documentation.