MySQL/RDS Database Backups to AWS S3 with Slack Alerts Using a Kubernetes CronJob
A step-by-step guide to automating MySQL database backups to AWS S3 using a Kubernetes CronJob, with alerts sent to Slack.
Introduction
Ensuring regular and secure backups of your MySQL databases is crucial for data protection and business continuity. In this post, we'll walk you through the process of automating MySQL database backups to AWS S3 using a Kubernetes CronJob. This solution leverages a bash script (available on GitHub) that takes backups of MySQL databases, compresses them, uploads them to an S3 bucket, deletes older backups based on the defined retention period, and sends Slack notifications for both successful and failed backups.
Step 1: Create an S3 Bucket
The first step is to create an S3 bucket where your database backups will be stored. Follow these steps (an AWS CLI equivalent is shown after the list):
- Log in to the AWS Management Console and navigate to the S3 service.
- Click on "Create bucket" and provide a unique bucket name (e.g., "your-company-mysql-backups").
- Choose the AWS region where you want to create the bucket.
- Configure any additional bucket settings as per your requirements (versioning, encryption, etc.).
- Click "Create bucket" to create the new S3 bucket.
Step 2: Configure AWS Credentials
To allow the Kubernetes CronJob to access the S3 bucket, you need to configure AWS credentials. You have two options:
Option 1: Create an IAM User with the Necessary Permissions
- Create an IAM user:
  - In the AWS Management Console, navigate to the IAM service.
  - Create a new IAM user or use an existing one.
  - Attach a custom policy to the IAM user to grant specific S3 permissions.
- Create a custom policy:
  - Go to the "Policies" section in the IAM service.
  - Click on "Create policy".
  - Use the JSON editor to define a minimal policy. Here is an example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::your-bucket-name"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::your-bucket-name/*"]
    }
  ]
}

  - Replace your-bucket-name with the actual name of your S3 bucket.
  - Save the policy and attach it to the IAM user.
- Retrieve AWS credentials:
  - Note down the Access Key ID and Secret Access Key for the IAM user (a CLI sketch of the whole flow follows this list).
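The same setup can be scripted with the AWS CLI. A sketch, assuming the policy JSON above has been saved as backup-policy.json; the user and policy names (mysql-backup, mysql-backup-s3) are illustrative:

# Create the user and the minimal S3 policy
aws iam create-user --user-name mysql-backup
aws iam create-policy --policy-name mysql-backup-s3 \
  --policy-document file://backup-policy.json
# Attach the policy (substitute your AWS account ID in the ARN)
aws iam attach-user-policy --user-name mysql-backup \
  --policy-arn arn:aws:iam::<your-account-id>:policy/mysql-backup-s3
# Generate the Access Key ID and Secret Access Key used in Step 3
aws iam create-access-key --user-name mysql-backup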
Option 2: Assign an IAM Role to the Kubernetes Worker Nodes
- Create an IAM role for the worker nodes:
  - In the AWS Management Console, navigate to the EKS service.
  - Select your EKS cluster and go to the "Compute" tab.
  - Select the worker node group and click "Edit".
  - Under "Node IAM Role", choose "Create new role".
- Attach a custom policy to the role:
  - In the IAM service, go to the "Roles" section.
  - Find the role created for the worker nodes.
  - Attach a custom policy with the minimal S3 permissions shown in Option 1.
- Finish the role configuration:
  - Attach the custom policy to the role.
  - Finish editing the worker node group and apply the changes (a quick verification sketch follows below).
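To confirm that pods actually pick up the node role, you can run a throwaway pod with the AWS CLI image and check which identity it resolves. This assumes the db-backup namespace from Step 3 already exists and that your nodes allow pods to reach the instance metadata service:

kubectl run aws-identity-check --rm -it --restart=Never \
  --image=amazon/aws-cli -n db-backup -- sts get-caller-identity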
Step 3: Create Kubernetes Secrets
If you're using Option 1 (IAM User) from Step 2, you'll need to create Kubernetes secrets for the AWS credentials and the MySQL database password.
- Create the db-backup namespace:

kubectl create namespace db-backup

- Create a secret for the AWS Access Key ID and Secret Access Key in the db-backup namespace:

kubectl create secret generic aws-secret-access-key --namespace db-backup \
  --from-literal=aws_access_key_id='YOUR_AWS_ACCESS_KEY_ID' \
  --from-literal=aws_secret_access_key='YOUR_AWS_SECRET_ACCESS_KEY'

- Create a secret for the MySQL database password in the db-backup namespace (the name target-app-database-password must match the secretKeyRef in the CronJob manifest below):

kubectl create secret generic target-app-database-password --namespace db-backup \
  --from-literal=database_password='YOUR_MYSQL_DATABASE_PASSWORD'

- If you're using Slack notifications, create a secret for the Slack webhook URL in the db-backup namespace (the command follows the webhook instructions below).

Generating a Slack Webhook (Optional)
If you want to receive Slack notifications for your database backups, you'll need to generate a Slack webhook URL. Follow these steps:
- Log in to your Slack workspace and navigate to the desired channel where you want to receive the notifications.
- Click on the channel name to open the channel settings.
- In the "Integrations" section, click on "Add an app or integration".
- Search for "Incoming Webhooks" and click on the "Add" button.
- Choose the channel where you want to receive the notifications and click "Add Incoming Webhooks integration".
- Copy the generated Webhook URL and use it in the slack-webhook-url secret in your Kubernetes manifests.
Once you have the webhook URL, create the secret:

kubectl create secret generic slack-webhook-url --namespace db-backup \
  --from-literal=slack_webhook_url='YOUR_SLACK_WEBHOOK_URL'
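Whichever secrets you created, it's worth confirming they all exist in the namespace before deploying the CronJob:

kubectl get secrets -n db-backup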
To create a backup user in MySQL for the CronJob to use, run the following SQL commands:

CREATE USER 'backup'@'%' IDENTIFIED BY 'K1sd5w8icShifhuC743hCuot23UYGK1H2UBwejfwh';
GRANT ALL ON *.* TO 'backup'@'%';
GRANT PROCESS ON *.* TO 'backup'@'%';

- The first command creates a new user named 'backup' with the password 'K1sd5w8icShifhuC743hCuot23UYGK1H2UBwejfwh' (substitute a strong password of your own). The '%' wildcard allows the user to connect from any host.
- The second command grants the 'backup' user all privileges on all databases.
- The third command grants the PROCESS privilege to the 'backup' user, which is required for executing certain MySQL commands like SHOW PROCESSLIST.
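Before wiring the user into the CronJob, you can confirm that it connects and has the expected privileges; here <db-ip-or-url> stands in for your TARGET_DATABASE_HOST value:

mysql -h <db-ip-or-url> -P 3306 -u backup -p -e "SHOW GRANTS FOR CURRENT_USER();"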
Note: Make sure to replace 'YOUR_AWS_ACCESS_KEY_ID', 'YOUR_AWS_SECRET_ACCESS_KEY', 'YOUR_MYSQL_DATABASE_PASSWORD', and 'YOUR_SLACK_WEBHOOK_URL' with the appropriate values in the Kubernetes manifests.
Manifests
CronJob Manifest
This manifest can be updated to run weekly or monthly backups by modifying the schedule field, as shown below. For more information, or to access the backup script, visit the GitHub repository.
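For example, keeping the same 11 PM run time, weekly and monthly variants of the schedule field would look like this:

schedule: "0 23 * * 0" # Run at 11 PM every Sunday (weekly)
schedule: "0 23 1 * *" # Run at 11 PM on the 1st of every month (monthly)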
For Secret values, first encode them with base64 before placing them in secrets.yaml (see the example below).
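For example, on Linux or macOS you can produce the base64-encoded value like this (the -n flag keeps a trailing newline out of the encoded secret):

echo -n 'YOUR_AWS_SECRET_ACCESS_KEY' | base64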
apiVersion: batch/v1
kind: CronJob
metadata:
  name: app-database-backup-daily
  namespace: db-backup
spec:
  schedule: "0 23 * * *" # Run at 11 PM every day
  #schedule: "*/2 * * * *" # For testing purposes, run every 2 minutes
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  suspend: false
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: app-database-backup
              image: iamjanam/mysql-backup-to-s3:v1
              imagePullPolicy: Always
              env:
                - name: AWS_ACCESS_KEY_ID
                  value: "YOUR_AWS_ACCESS_KEY_ID"
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: aws-secret-access-key
                      key: aws_secret_access_key
                - name: AWS_DEFAULT_REGION
                  value: "eu-central-1"
                - name: AWS_BUCKET_NAME
                  value: "app-mysqldb-backup"
                - name: AWS_BUCKET_BACKUP_PATH
                  value: "daily" # folder inside the bucket; can be daily, monthly, or yearly
                - name: TARGET_DATABASE_HOST
                  value: "db-ip-or-url"
                - name: TARGET_DATABASE_PORT
                  value: "3306"
                - name: TARGET_DATABASE_NAMES
                  value: "db name" # multiple databases can be listed, e.g. "dev,qa,staging"
                - name: TARGET_DATABASE_USER
                  value: "backup"
                - name: DELETE_OLDER_THAN
                  value: "25 days" # must be expressed in days; for a year, use "365 days". No units other than days are supported.
                - name: TARGET_DATABASE_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: target-app-database-password
                      key: database_password
                - name: SLACK_ENABLED
                  value: "true"
                - name: SLACK_CHANNEL
                  value: "#backup-notifications"
                - name: SLACK_WEBHOOK_URL
                  valueFrom:
                    secretKeyRef:
                      name: slack-webhook-url
                      key: slack_webhook_url
          restartPolicy: Never
Persistent Volume Claim (PVC)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-database-backup-pvc
  namespace: db-backup
spec:
  storageClassName: gp2 # Update based on your cloud provider; on DigitalOcean, for example, it's do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi
Secrets
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret-access-key
  namespace: db-backup
type: Opaque
data:
  aws_secret_access_key: your-aws-secret-access-key # base64-encoded value
---
apiVersion: v1
kind: Secret
metadata:
  name: target-app-database-password
  namespace: db-backup
type: Opaque
data:
  database_password: your-db-password # base64-encoded value
---
apiVersion: v1
kind: Secret
metadata:
  name: slack-webhook-url
  namespace: db-backup
type: Opaque
data:
  slack_webhook_url: your-slack-webhook # base64-encoded value
Step 4: Update the CronJob Manifest
- Open the CronJob manifest file in a text editor.
- Update the AWS_BUCKET_NAME and AWS_BUCKET_BACKUP_PATH environment variables with the name of your S3 bucket and the desired backup path within the bucket.
- If you're using an IAM User (Option 1 from Step 2), set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to the appropriate values.
- Update the TARGET_DATABASE_HOST, TARGET_DATABASE_PORT, TARGET_DATABASE_NAMES, and TARGET_DATABASE_USER environment variables with the details of your MySQL database.
- If you're using Slack notifications, update the SLACK_CHANNEL environment variable with your desired Slack channel.
- Save the changes to the CronJob manifest file.
Step 5: Deploy the CronJob
- Run the following commands to deploy the CronJob, Secrets, and PVC to your Kubernetes cluster:

kubectl apply -f your-cronjob-manifest.yaml
kubectl apply -f secrets.yaml
kubectl apply -f pvc.yaml

- Verify that the CronJob is scheduled by executing:

kubectl get cronjobs -n db-backup

- You can also check the logs of a backup job to monitor the backup process (a way to trigger a run on demand is shown below):

kubectl logs -f job/<job-name> -n db-backup
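Rather than waiting for the nightly schedule (or switching to the commented-out every-2-minutes test schedule), you can trigger a one-off run directly from the CronJob; the job name manual-backup-test here is arbitrary:

kubectl create job --from=cronjob/app-database-backup-daily manual-backup-test -n db-backup
kubectl logs -f job/manual-backup-test -n db-backup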
Conclusion
By following this step-by-step guide and deploying the provided manifests, you'll have a fully configured and automated MySQL database backup solution that securely stores your backups in an AWS S3 bucket.
The Kubernetes CronJob will handle the scheduling and execution of the backup process, while the necessary AWS credentials and database information are securely stored as Kubernetes Secrets. This approach ensures that your valuable MySQL data is regularly backed up, providing a reliable mechanism for data protection and recovery in case of any unforeseen events.