Channel: rtrouton – Der Flounder

Backing up a Jamf Pro database hosted in Amazon Web Services’ RDS service to an S3 bucket


For those using Amazon Web Services to host Jamf Pro, one of the issues you may run into is getting backups of your Jamf Pro database that you can actually access. AWS’s RDS service backs up your database to S3, but you don’t get direct access to the S3 bucket where those backups are stored.

In the event that you want an accessible backup of your RDS-hosted MySQL database, Amazon provides an option for exporting a database snapshot to an S3 bucket in your AWS account. That process, though, exports your data in Apache Parquet format instead of a MySQL database export file.

However, it’s also possible to create and use an EC2 instance to perform the following tasks:

  1. Connect to your RDS-hosted MySQL database.
  2. Create a backup of your MySQL database using the mysqldump tool.
  3. Store the backup in an S3 bucket of your choosing.

For more details, please see below the jump.

Setting up the backup server

In order to run the backups, you’ll need to set up several resources in AWS: an S3 bucket, an IAM role, an EC2 instance and a VPC Security Group.

Please use the procedure below to create the necessary resources:

1. Create an S3 bucket to store your MySQL backups in.

2. Set up an IAM role which allows an EC2 instance to have read/write access to the S3 bucket where you’ll be storing the backups.

3. Create an EC2 instance running Linux.

Note: This instance will need to have enough free space to store a complete backup of your database, so I recommend looking at the size of your database and choosing an appropriate amount of disk space when you’re setting up the new instance.

4. Install the following tools on your Linux EC2 instance: the MySQL client tools (which provide mysqldump and mysql_config_editor) and the AWS command line tools (for copying the backups to your S3 bucket).

5. Attach the IAM role to your EC2 instance.

6. Create a VPC Security Group which allows your EC2 instance and RDS-hosted database to successfully communicate with each other.

Note: If you’re running Jamf Pro in AWS and you’re hosting your database in RDS, you likely have a security group like this set up already. Otherwise, your Jamf Pro server wouldn’t be able to communicate with the database.

7. Add the EC2 instance to the VPC Security Group which allows access to your RDS database.
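For step 2, the IAM role’s policy might look like the sketch below. The bucket name matches the example used later in this post, and the action list is my assumption about the minimum that read/write access to the bucket requires; adjust both for your environment.

```shell
# Sketch of an IAM policy granting the EC2 instance read/write access to the
# backup bucket (bucket name and actions are illustrative):
cat > /tmp/backup-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::jamfpro-database-backup"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::jamfpro-database-backup/*"
    }
  ]
}
EOF
```

The policy document would then be attached to the IAM role you assign to the EC2 instance.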

Once all of the preparation work has been completed, use the following procedure to set up the backup process:

Note: For the purposes of this post, I’m using Red Hat Enterprise Linux (RHEL) as the Linux distro. If using another Linux distro, be aware that you may need to make adjustments for application binaries being stored in different locations than they are on RHEL.

Setting up MySQL authentication

1. Log into your EC2 instance.

2. Run the following command to change to a shell which has root privileges.

sudo -s

3. Create a MySQL connection named local using a command similar to the one below:

mysql_config_editor set --login-path=local --host=rds.database.server.url.goes.here --user=username --password

You’ll then be prompted for the password to the Jamf Pro database.

For example, if your Jamf Pro database has the following RDS URL and username:

  • URL: jamfprodb.dcjkudz4hlph.eu-west-1.rds.amazonaws.com
  • Username: jamfsw03

The following command would be used to create the MySQL connection:

mysql_config_editor set --login-path=local --host=jamfprodb.dcjkudz4hlph.eu-west-1.rds.amazonaws.com --user=jamfsw03 --password

Running this command should create a file named .mylogin.cnf in root’s home directory. To see the contents of the MySQL connection file and verify that it’s set up correctly, run the following command:

mysql_config_editor print --login-path=local

That should produce output which looks similar to what’s shown below:

[local]
user = jamfsw03
password = *****
host = jamfprodb.dcjkudz4hlph.eu-west-1.rds.amazonaws.com

Note: The reason for creating the MySQL connection is so we don’t need to store the database password as plaintext in the script.

Creating the backup script

1. Once the MySQL connection has been created, copy the script below and store it as /usr/local/bin/aws_mysql_database_backup.sh.

This script has several variables that will need to be edited. For example, if your Jamf Pro database is named jamfprodb, the S3 bucket you created is named jamfpro-database-backup and the MySQL connection you set up is named local, the following variables would look like this:

# Enter name of the RDS database being backed up

database_name=jamfprodb

# Enter name of the S3 bucket

S3_bucket=jamfpro-database-backup

# Enter the MySQL connection name

mysql_connection_name=local

This script is also available via the link below:

https://github.com/rtrouton/aws_scripts/tree/master/rds_mysql_backup_to_s3_bucket
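As a rough sketch of what such a backup script can look like, see below. The variable values are the examples from above; piping the dump through gzip and using the aws CLI for the S3 copy are assumptions on my part, and the actual script at the GitHub link also handles the logs mentioned later, which this sketch omits.

```shell
# Write a sketch of the backup script to a temporary path; on the EC2 instance
# it would be saved as /usr/local/bin/aws_mysql_database_backup.sh.
cat > /tmp/aws_mysql_database_backup.sh <<'EOF'
#!/bin/bash

# Enter name of the RDS database being backed up
database_name=jamfprodb

# Enter name of the S3 bucket
S3_bucket=jamfpro-database-backup

# Enter the MySQL connection name
mysql_connection_name=local

backup_file="/tmp/${database_name}_$(date +%Y-%m-%d-%H%M%S).sql.gz"

# Dump the database using the stored login path, compressing as we go
mysqldump --login-path="$mysql_connection_name" \
  --max-allowed-packet=1024M \
  --single-transaction \
  --routines \
  --triggers \
  "$database_name" | gzip > "$backup_file"

# Copy the compressed dump to the S3 bucket, then remove the local copy
aws s3 cp "$backup_file" "s3://${S3_bucket}/"
rm -f "$backup_file"
EOF
```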

2. Make the script executable by running the following command with root privileges:

chmod 755 /usr/local/bin/aws_mysql_database_backup.sh

3. Ensure that root owns the file by running the following command with root privileges:

chown root:root /usr/local/bin/aws_mysql_database_backup.sh

Note: The mysqldump command used in the script is set up with the following options:

  • --max-allowed-packet=1024M
  • --single-transaction
  • --routines
  • --triggers

--max-allowed-packet=1024M: This specifies a max_allowed_packet value of 1 GB for mysqldump, allowing the packet buffer to grow beyond its default 4 MB limit up to the 1 GB specified.

--single-transaction: Generates a checkpoint that allows the dump to capture all data prior to the checkpoint while still receiving incoming changes. Those incoming changes do not become part of the dump, which ensures the same point in time for all tables.

--routines: Dumps all stored procedures and stored functions.

--triggers: Dumps all triggers for each table that has them.

These options are designed for use with InnoDB tables and provide an exact point-in-time snapshot of the data in the database. They also do not require the MySQL tables to be locked, which in turn allows the Jamf Pro database to continue working normally while the backup is taking place.

Scheduling the database backup

You can set up a nightly database backup using cron. For example, if you wanted to set up a database backup to run daily at 11:30 PM, you can use the procedure below to set that up.

1. Export existing crontab by running the following command with root privileges:

crontab -l > /tmp/crontab_export

2. Export new crontab entry to exported crontab file by running the following command with root privileges:

echo "30 23 * * * /usr/local/bin/aws_mysql_database_backup.sh 2>&1" >> /tmp/crontab_export

3. Install new cron file using exported crontab file by running the following command with root privileges:

crontab /tmp/crontab_export
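The three steps above can be run together as shown below. The `|| true` guard is my addition, covering the case where root has no existing crontab yet, and the final `crontab` install step is commented out here since it needs to run with root privileges on the instance itself.

```shell
# Append the nightly backup job to a copy of root's current crontab.
# "|| true" keeps the export step from failing when no crontab exists yet.
crontab -l > /tmp/crontab_export 2>/dev/null || true
echo "30 23 * * * /usr/local/bin/aws_mysql_database_backup.sh 2>&1" >> /tmp/crontab_export

# Install the updated crontab (run with root privileges on the instance):
# crontab /tmp/crontab_export
```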

 

Once everything is set up and ready to go, you should see your database backups and associated logs begin to appear in your S3 bucket.

[Screenshot: database backup files and logs appearing in the S3 bucket]

