Secret vault upgrade guide
OpsChain's secret vault storage backend has changed from file-based to database-based, allowing the secret vault to operate in high-availability setups. Follow this guide when upgrading to this version. Failure to do so might result in data loss.
This process is only required if you're upgrading from an OpsChain version prior to 2026-03-27 and you are using or want to use the default secret vault.
Stop OpsChain
To avoid potential data loss, you should stop the OpsChain API and workers before upgrading the secret vault.
kubectl scale deployment opschain-api --replicas=0 -n ${KUBERNETES_NAMESPACE}
kubectl scale deployment opschain-api-worker --replicas=0 -n ${KUBERNETES_NAMESPACE}
Backup the existing secret vault data
Skipping this step might result in permanent data loss. Do not proceed without a working backup of your secret vault data.
Ensure your secret vault instance is running and accessible. You can check this by opening the secret vault UI and verifying that your secrets are visible.
You can create a backup of all secrets in your vault by running the following commands:
- Obtain the secret vault root token:

  kubectl get secrets/opschain-vault-config -o jsonpath='{.data.OPSCHAIN_VAULT_TOKEN}' | base64 -d

  Copy the root token to your clipboard.
- Enter the secret vault container:

  kubectl exec -it pod/opschain-secret-vault-0 -n ${KUBERNETES_NAMESPACE} -- /bin/sh
- Create a directory to store the backup on the vault filesystem:

  mkdir -p /tmp/backup
- From within the vault pod, download the medusa tool to export the secrets from the vault:

  cd /tmp
  wget https://github.com/jonasvinther/medusa/releases/download/v0.7.3/medusa_0.7.3_linux_amd64.tar.gz
  tar -xzvf ./medusa_0.7.3_linux_amd64.tar.gz
  chmod +x /tmp/medusa
- Export the secrets from the vault. Replace <root-token> with the root token you obtained in step 1:

  ./medusa export secrets -m "kv1" --address="https://0.0.0.0:8200" --token="<root-token>" --format="json" -k > backup/secrets.json
- Exit the vault pod:

  exit
- Make a directory to store the backup on the host filesystem:

  mkdir -p /limepoint/backup
- Copy the backup file to the host filesystem:

  kubectl cp opschain-secret-vault-0:/tmp/backup/ /limepoint/backup/ -n ${KUBERNETES_NAMESPACE}
- Verify that the backup file was created and is not empty (assuming you have secrets in your vault):

  ls -lah /limepoint/backup/secrets.json
- Remove every .data key from your backup file. For example, if you have the following structure:

  {
    "my_secret_1": {
      "data": {
        "secret": "value"
      }
    }
  }

  It should be transformed into the following structure:

  {
    "my_secret_1": {
      "secret": "value"
    }
  }
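If you have many secrets, editing the backup file by hand is error-prone. The transformation above can be sketched in Python (a minimal, unofficial helper — the `unwrap_data` name is ours, and the export shape is assumed to match the example above):

```python
import json

def unwrap_data(exported: dict) -> dict:
    """Remove the extra "data" level around each exported secret."""
    if not exported:
        raise ValueError("backup is empty - do not proceed without a valid backup")
    return {
        # Only unwrap entries that are exactly {"data": {...}}; leave anything
        # else untouched so unexpected shapes are preserved for manual review.
        name: value["data"] if isinstance(value, dict) and set(value) == {"data"} else value
        for name, value in exported.items()
    }

# Example matching the structure shown above:
backup = {"my_secret_1": {"data": {"secret": "value"}}}
print(json.dumps(unwrap_data(backup)))  # {"my_secret_1": {"secret": "value"}}
```

To apply it to the real backup, load /limepoint/backup/secrets.json with json.load, pass the result through unwrap_data, and write it back with json.dump. Keep the untouched original file as well.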
Update the values.yaml file
In addition to the regular configuration changes to your values.yaml file, you must make the following changes:
- Add the secretVault.unsealKey setting. You can generate the unseal key as described in the OpsChain configuration variables guide.

  secretVault:
    unsealKey: "<unseal-key>"
    # ... other settings
- Remove the secretVault.volume section.
With these settings, OpsChain will set up a new secret vault instance using the given unseal key when you deploy it.
Delete the existing secret vault instance
Assuming your backup is safe and valid, remove the existing secret vault instance by running the following command:
kubectl delete statefulset/opschain-secret-vault -n ${KUBERNETES_NAMESPACE}
Deploy OpsChain
Deploy OpsChain using the regular Helm upgrade command. Once deployed, the new secret vault with database storage will be created in your Kubernetes namespace. Once the API is running again, it will initialize the new secret vault and save its configuration.
You can check the logs of the new secret vault pod to verify the secret vault is working:
kubectl -n ${KUBERNETES_NAMESPACE} logs pod/opschain-secret-vault-0 -f
You can also check the API pod's logs to verify that it has successfully initialized the new secret vault:
kubectl -n ${KUBERNETES_NAMESPACE} logs deployment/opschain-api -f
Redeploying OpsChain will bring the API and workers back online. If you want to prevent any activity while you restore your secrets, scale the API and workers back down to 0 before continuing.
Import the backup secrets into the new secret vault
Once the API has successfully initialized the new secret vault, you can import the backup secrets into the new secret vault by running the following commands:
- Obtain the new root token:

  kubectl get secrets/opschain-vault-config -o jsonpath='{.data.OPSCHAIN_VAULT_TOKEN}' | base64 -d

  Copy the new root token to your clipboard.
- Copy the backup file to the vault pod:

  kubectl cp /limepoint/backup/secrets.json opschain-secret-vault-0:/tmp/secrets.json -n ${KUBERNETES_NAMESPACE}
- Enter the vault pod and re-install the medusa tool inside it:

  kubectl exec -it pod/opschain-secret-vault-0 -n ${KUBERNETES_NAMESPACE} -- /bin/sh
  cd /tmp
  wget https://github.com/jonasvinther/medusa/releases/download/v0.7.3/medusa_0.7.3_linux_amd64.tar.gz
  tar -xzvf ./medusa_0.7.3_linux_amd64.tar.gz
  chmod +x /tmp/medusa
- Import the secrets into the vault. Replace <root-token> with the new root token you obtained in step 1:

  ./medusa import secrets -m "kv1" --address="https://0.0.0.0:8200" --token="<root-token>" --format="json" -k /tmp/secrets.json

  You should see a log line like "Secret successfully written to Vault [https://0.0.0.0:8200] using path ..." for each secret that was imported.
- Exit the vault pod:

  exit
- Verify the secrets were imported by logging in to the vault UI and checking that they are present.
- For each secret that has been imported, edit the secret in JSON mode and re-add the .data key around the secret value. For example, if you have the following secret in the UI:

  {
    "secret": "value"
  }

  It should become the following:

  {
    "data": {
      "secret": "value"
    }
  }

  Tip: you should only see one "data" key per secret. If your secret has multiple keys, all of them are wrapped in the same "data" key.
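Re-adding the .data key by hand in the UI can be tedious if you have many secrets. As an alternative (an untested sketch, not an official OpsChain procedure — the `wrap_data` name is ours), you could apply the inverse transformation to the backup file before running the medusa import, so each imported secret already carries its "data" wrapper:

```python
import json

def wrap_data(secrets: dict) -> dict:
    """Wrap each secret's keys in a single "data" object - the inverse of the
    unwrapping performed when the backup file was prepared for export."""
    return {name: {"data": value} for name, value in secrets.items()}

# Example matching the structure shown above:
flat = {"my_secret_1": {"secret": "value"}}
print(json.dumps(wrap_data(flat)))  # {"my_secret_1": {"data": {"secret": "value"}}}
```

If you take this route, still spot-check a few secrets in the vault UI afterwards to confirm the "data" key is present exactly once per secret.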
You can save the secret and verify it's working by accessing it from your actions and properties. Once all secrets have been properly imported, OpsChain will be fully operational again and you can resume your work. If you have scaled down the API and workers again, you can scale them back up:
kubectl scale deployment opschain-api --replicas=1 -n ${KUBERNETES_NAMESPACE}
Change the replica counts to match your original configuration; you can see what's configured in your values.yaml file.
kubectl scale deployment opschain-api-worker --replicas=3 -n ${KUBERNETES_NAMESPACE}