Backup system in DA

What backup systems are available?

There are 2 backup systems.

  1. The recommended one is the DirectAdmin backup system.
    It will create a tar.gz file for each DirectAdmin account storing all information specific to that User (databases, email accounts, email data, domains, website, subdomains, etc.).
    This is the recommended method because it has a simple 1 click restore.
    It's handy for user backups and to move accounts between servers.
    This backup system has 3 different levels of interfaces, all of which create a file with the same contents:

    • Admin Level -> Admin Backup/Transfer
    • Reseller Level -> Manage User Backups
    • User Level -> Site Backup
  2. The other backup system is:

    • Admin Level -> System Backup

    This is an interface to the 3rd party "sysbk" backup script.
    The System Backup will back up all data, including config files for services (like /etc/exim.conf, directadmin.conf, etc.) that are not part of the DA backup.
    This tool can also be customized to add/remove paths that you would like.
    However, this tool does not have a 1 click restore option.
    NOTE: If you want to use Remote FTP for System Backup, the "ncftp" package must be installed (see the install example below).
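
    If ncftp is not already installed, it is usually available from the OS package manager; the package name "ncftp" is an assumption that holds for the common EPEL and Debian/Ubuntu repositories:

    # RHEL/CentOS/AlmaLinux (the EPEL repository may be required):
    yum install ncftp

    # Debian/Ubuntu:
    apt-get install ncftp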

Admin Backup/Transfer GUI method

Using the GUI, you may set up daily backups for all the accounts via Admin level -> Admin Backup/Transfer. Press "Schedule" to enter the Setup Wizard.

step1: Select "All users" if you want to back up all of the users, otherwise choose "Selected users" and specify which of them should be backed up.

Note that "All users" will include all new accounts that will be created in the future as well.

Step 2: Select Cron Schedule and add the desired cron time. For example, for backups to run every night at 00:00, use the following settings:

  • Minute: 0
  • Hour: 0
  • Day of month: *
  • Month: *
  • Day of week: *

Cron settings may be customized in many different ways. You may find https://crontab.guru useful for configuring your crons with the correct syntax.
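
For reference, the five fields above map directly onto a single crontab-style expression (minute, hour, day of month, month, day of week); one alternative weekly schedule is shown as an example:

# every night at 00:00
0 0 * * *

# every Sunday at 02:30
30 2 * * 0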

Step 3: You may choose where to store backups, either locally or remotely. For example, to store backups remotely, tick the FTP checkbox and fill in the required fields for the FTP connection.

You may also want to overwrite backups, which can be configured in the "Append" section. For example, to overwrite backups weekly and keep only the last 7 daily backups, select the "Day of Week:" option.

Of course, you may choose any other option or even customize your own append strategy using strftime syntax (more information may be found here: http://strftime.net).

Step 4: Now select what kind of data you want to store. To create full backups, simply click "All data".

If you only want to store a specific part of user data, for example, SQL databases, you may select "Selected Data" and then leave the "Database settings" and "database data" checkboxes active.

Finally, make sure that all settings are set up correctly and press "Schedule". You may always view/modify configured backup jobs under Admin level -> Admin Backup/Transfer.

Backup/Restore settings

User-level Backup/Restore settings

Several backup/restore settings can be selected when you are generating/restoring a backup as a User. These backups are stored in /home/USERNAME/backups/ (replace USERNAME with the actual username) and may also be called Site Backups. There is no option to schedule a backup via this interface. The following options are available via checkboxes displayed in the User-level Dashboard Advanced Features > Create/Restore Backups interface:

  • Site Backup
    • Website Data
    • Domains Directory: Backs up all user files for all domains
    • Subdomains Lists: Backs up the list of subdomains for each domain
  • E-mail
    • E-mail Accounts List for all domains (names and passwords)
    • E-mail Data: Includes the messages from the Inbox, IMAP Folders, and webmail data.
    • E-mail Settings: Includes the filters and the catchall address.
    • Forwarders: Includes all forwarding addresses.
    • Autoresponders: Includes all autoresponders and messages.
    • Vacation Messages: Includes all vacation messages and times.
  • FTP
    • FTP Accounts
    • FTP Settings
  • Database
    • Database Settings: Backs up all DB Users and DB Settings
    • Database Data
  • Trash
    • Deleted Trash Data

Reseller-level Backup/Restore settings

A Reseller can configure their Users' backup/restore settings at Reseller Level > Reseller Tools > Manage User Backups > Backup/Restore Settings. Backups are stored in the /home/RESELLER_NAME/user_backups/ directory (replace RESELLER_NAME with the actual reseller username) by default (though other paths are configurable). The options include:

  • Send a message when a backup has finished.
  • Restore with local NameServers OR Use NS values from backup.
  • Restore with SPF values from backup OR Use local SPF values.

Admin-level Backup/Restore settings

While general backup/restore settings can be configured when generating/restoring a backup, global backup/restore settings can be set via the DirectAdmin Admin-level Admin Tools > Admin Backup/Transfer > BACKUP/RESTORE SETTINGS. Backups are stored in the /home/admin/admin_backups/ directory by default (though other paths are configurable). These include the following options:

  • To send a message when a backup has finished
  • Select what nameservers will be used during the restore (local or those from backup).
  • Select what SPF values have to be restored (local or those from backup).
  • To check for domain conflicts in the /etc/virtual/domainowners file, rather than the named.conf or remote named.conf files.

Additional directadmin.conf values are also exposed via the Admin's BACKUP/RESTORE SETTINGS.

Backup Creation and Scheduling options

General options that are available to both Resellers and Admins when scheduling a backup include the following:

  • WHO
    • All Users
    • All Users Except Selected
    • Selected Users
      • Users (contains a checkbox list of users to select from. Only visible if either option "All Users Except Selected" or "Selected Users" is selected.)
    • Skip Suspended
  • WHEN
    • Now
    • Cron Schedule
      • Cron Settings
        • Minute
        • Hour
        • Day of Month
        • Month
        • Day of Week
  • WHERE
    • Local
    • Local path (configurable, default: /home/admin/admin_backups)
    • FTP
      • FTP Settings
        • IP
        • Username
        • Password
        • Remote Path (default: / )
        • Port (default: 21 )
        • Secure FTP
    • Append (default: Nothing )
  • WHAT
    • All Data
    • Selected Data (displays the following checkboxes if selected):
      • Domains Directory
      • Subdomain Lists
      • FTP Accounts
      • FTP Settings
      • Database Settings
      • Database Data
      • Forwarders
      • E-mail Accounts
      • E-mail Data
      • E-mail Settings
      • Vacation Messages
      • Autoresponders
      • Mailing Lists
      • Deleted Trash Data
      • All / None (links to reset checkbox selections to either all selected or none selected)

The Backup Progress Monitor

A backup progress monitoring feature has been added. After you initiate a backup and it has started, reload your CMD_ADMIN_BACKUP (Admin Backup/Transfer) interface (hit F5 a few times) for the table to show up. You will see a tab labeled "IN PROGRESS"; clicking it opens the backup progress monitoring table, containing the backup's details and the progress of the backup process displayed via a progress bar.

The directadmin.conf variable track_task_queue_processes controls this feature. The default is track_task_queue_processes=1, which gives a process overview of who is being backed up, how many Users there are, and the progress made, displayed via a progress bar. You can change this to track_task_queue_processes=2 for much more detailed tracking, which dumps the tracked process location to a log file that scrolls in DA as it goes.
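
A minimal sketch of switching to the more detailed mode, assuming the same da config-set form used elsewhere in this guide:

da config-set track_task_queue_processes 2
# restart DirectAdmin if the change does not take effect immediately
service directadmin restart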

Custom append values in backup path

The backup locations can be customized with 'custom append values', letting you build your own naming scheme.

There is currently no native retention functionality in the Admin backup/restore option, but one can be approximated using custom append paths. Here are a few examples to give you an idea:

  1. Daily backups. Custom Append Path: none. Result: Backups are overwritten each day, and nothing is retained except the most recent day's worth.

  2. Daily backups. Custom Append Path: Weekday. Result: Each backup is placed in a folder ending with the day of the week, which effectively gives a retention of 7 for daily backups. For example, on Monday a backup is made and stored in the directory with -monday appended; when Monday rolls around again, the daily backup runs and overwrites it.

  3. Weekly backups. Custom Append Path: Week number. Result: One backup per week, retaining 4. Since there are 4 weeks in a month, week-1's backup is overwritten on week-1 of each new month, as are week-2's, week-3's, and week-4's backups.

  4. Monthly backups. Custom Append Path: Month. Result: One backup per month with a retention of 12 (since each month's backup is overwritten each new year).

  5. Daily backups. Custom Append Path: Full Date. Result: Retains ALL backups and may eventually fill the drive. If you prefer the full date path, consider a cron that removes all files older than X days from the backups path. For example, to keep this naming convention but retain only 8 daily backups at a time, you could remove all files older than 8 days from the /home/admin/admin_backups/*/ path.

Considering the flexibility of cron scheduling, and combining this with the ability to use a custom append path, even using a custom strftime value in this path (https://www.directadmin.com/features.php?id=1565), the possibilities for configuring a backup retention policy are immense.

Alternatively, you could use borg backups via CLI to configure a retention policy.

The custom append path uses strftime to generate a path where the variables used are swapped with time strings for the given time.

You can create your own paths with this tool:
http://strftime.net

But keep in mind that not all substitutions are going to be allowed by DA.

The following characters are allowed in the field:

%
a-z
A-Z
0-9
/
-
.
_

However, strftime swaps the variable combinations that you enter with other strings.

The resulting string must be a valid path and must also not contain the following characters (which some strftime variables can generate):

: , %

Some common example uses and what they will generate are shown below:

%F      2014-03-05
%A      Wednesday
%B      March
%m/%d   03/28

Because this tool allows for some creative path making, it's up to the Admin to ensure all destination folders exist.
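
Since the append values are plain strftime patterns, you can preview what a pattern will expand to with the date command before using it, and pre-create the resulting folder (the path below is only an illustration):

# preview what the append value %A produces today
date +%A

# pre-create the destination folder so the backup has somewhere to land
mkdir -p /home/admin/admin_backups/`date +%A`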

IMPORTANT

If you're creating a dynamic path in your /home/user directory, be very aware of what paths DA will skip in the backups. For example, this would be okay:

/home/admin/admin_backups/Wednesday

because Wednesday is below admin_backups.

However if you use the following:

/home/admin/admin_backups_Wednesday

this is not a skipped folder. This means that if you back up the "admin" account and save it to admin_backups_Wednesday, it will recursively and possibly infinitely back itself up, filling your disk.

How to enable zstd compression for backups

DirectAdmin supports the tar.zst format, using the zstd compression, which is far better than gzip in terms of space used as well as compression and decompression performance.

To tell DA to use zstd for backups:

da config-set zstd 1
da config-set backup_gzip 2
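
To confirm the values were written, you can check directadmin.conf directly (path per a standard install):

grep -E '^(zstd|backup_gzip)=' /usr/local/directadmin/conf/directadmin.conf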

Which folders are skipped from backup?

There are certain folders within the /home/username directory that will be skipped during the backup procedure. This is because they would either cause loops (backing up the backup), or because an account inside a chroot jail does not need all of the copied binaries and libraries to be included.

The list of skipped folders from the User's home directory is as follows:

backups
user_backups
admin_backups
usr
bin
etc
lib
lib64
tmp
var
sbin
dev

Data within these folders (e.g., /home/user/var/*) is not included in the backup. This can be used to your advantage if you wish to have data in the User's home but do not wish to include that data in the backups. You could create a symbolic link to it from within the public_html, for example.
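
For example, a large data set could be kept under the skipped ~/var folder and still be reachable from the website through a symbolic link inside public_html; the username and domain below are hypothetical:

# data stored here is excluded from the User backup
mkdir -p /home/username/var/big_assets

# expose it to the website via a symlink
ln -s /home/username/var/big_assets /home/username/domains/example.com/public_html/assets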

DirectAdmin version 1.48 added the ability to skip paths from Users' tar.gz backup files.

Any other path could be skipped by creating one or both of the following files:

  • Per User: /usr/local/directadmin/data/users/username/skip_backup_home_files.list
  • Global: /usr/local/directadmin/data/admin/skip_backup_home_files.list

The Global file is used only if the Per User file doesn't exist; if the Per User file exists, it is used instead (the files are not merged).

In these files, you can list files or folders below the given User's /home/username path, such that they are skipped and not added into the backup.

Note that this list removes the named entries from the directory listing DA takes of /home/user, which means you cannot add sub/paths.

Sample valid values:

Maildir
application_backups

And these are invalid values that won't be skipped:

some/specific/path.txt

This doesn't work because some/specific/path.txt is not a direct entry of /home/user; only "some" would match. So, you can only skip complete folders, starting from /home/username.

Using "some" in the list would be valid, but of course, it would skip everything in that folder, not just the specific path.txt file.

How to include a date in backup filenames

If you'd like to change the backup filename from user.admin.username.tar.gz to user.admin.username.2012-10-15-23-32.tar.gz, where the date represents YYYY-MM-DD-HH-MM (e.g., Mon Oct 15 23:32:13 MDT 2012), create a script called /usr/local/directadmin/scripts/custom/user_backup_post.sh and fill it with the following code.

#!/bin/sh

#set this as needed
RESELLER=admin

BACKUP_PATH=`echo $file | cut -d/ -f1,2,3,4`
REQUIRED_PATH=/home/$RESELLER/admin_backups

if [ "$BACKUP_PATH" = "$REQUIRED_PATH" ]; then
   if [ "`echo $file | cut -d. -f4,5`" = "tar.gz" ]; then
       NEW_FILE=`echo $file | cut -d. -f1,2,3`.`date +%F-%H-%M`.tar.gz
       if [ -s "$file" ] && [ ! -e "$NEW_FILE" ]; then
           mv $file $NEW_FILE
       fi
   fi
fi
exit 0;

The script above only renames locally created backups, created by "admin" in /home/admin/admin_backups, but the RESELLER variable and path check can be changed as needed.

And make it executable:

chmod 755 /usr/local/directadmin/scripts/custom/user_backup_post.sh

This method is useful when DA won't accept certain characters when using custom append values in the backup path via the GUI.

How to backup SQL files with CustomBuild

Before making any changes, or at regular intervals, you might want to make backups of your databases separately from the normal DirectAdmin User backup system. This guide explains how to back up the .sql files on their own using CustomBuild.

Backup

da build set mysql_backup yes
da build mysql_backup

If you want to prevent future calls to da build mysql_backup from overwriting these files, rename the backup folder:

mv mysql_backups mysql_backups.`date +%F`

Note that updating MySQL with mysql_backup=yes set in the options.conf will re-dump the database to the mysql_backups directory.

Note: These .sql files contain the "DROP DATABASE" and "CREATE DATABASE" commands, unlike the .sql files in the DA User backups, so they cannot be easily interchanged.
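
A hedged sketch of automating this from root's cron: re-dump the databases, then date-stamp the folder so the next run does not overwrite it (assumes the da wrapper is in the PATH):

#!/bin/sh
# dump all databases to ./mysql_backups, then rename the folder with today's date
cd /usr/local/directadmin/custombuild || exit 1
da build mysql_backup
mv mysql_backups mysql_backups.`date +%F`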

Restore

If disaster hits, and you need to restore these .sql files, once MySQL is up and running and the da_admin user/pass is working correctly, you can run:

cd /usr/local/directadmin/custombuild/mysql_backups
wget http://files.directadmin.com/services/all/mysql/restore_sql_files.sh
chmod 755 restore_sql_files.sh
./restore_sql_files.sh

which restores all User databases.

If you also need to restore the mysql.* tables (usually, you'd avoid doing this unless you've lost all your mysql user/passwords), then you'd call the script like so:

./restore_sql_files.sh with_mysql

which will include the mysql.sql file for the restore, but will end up overwriting the da_admin password, so you may need to reset that if it was changed.

IMPORTANT The DirectAdmin backup/restores are database type/version independent. However, these backups are not always universally interchangeable between databases. The User databases typically are fine, but the mysql tables in mysql.sql vary per version, so worst case is that you might not be able to (easily) restore the mysql.sql file into your database.

MySQL 5.7 uses a different password column name (replaced "password" with "authentication_string") so restoring the mysql.user table wouldn't work, just as one example.

When possible, use the same version of MySQL or MariaDB used to create these sql files.

How to create a full backup via the command line

If your server is on its way to being fully dead, or your license has expired, you can still create backups via the command line. To do so, run the following command:

/usr/local/directadmin/directadmin admin-backup --destination=/home/admin/admin_backups

This will create all backups in /home/admin/admin_backups, assuming there is enough of a system left to do so.

The command to create a backup for specific Users is:

/usr/local/directadmin/directadmin admin-backup --destination=/home/admin/admin_backups --user=testuser1 --user=testuser2

where testuser1 and testuser2 are the accounts you're backing up, by Admin.

To restore a single User, the command is:

echo "action=restore&ip%5Fchoice=file&local%5Fpath=%2Fhome%2Fadmin%2Fadmin%5Fbackups&owner=admin&select%30=user%2Eadmin%2Etestuser%2Etar%2Egz&type=admin&value=multiple&when=now&where=local" >> /usr/local/directadmin/data/task.queue

where user%2Eadmin%2Etestuser%2Etar%2Egz is the name of the file being restored. Replace periods with %2E (hex value). Note that you can also use the testuser%2Etar%2Egz format as well, as either will work. This restore specifies to use the IP stored in the backup file for the restore. If you want to specify the IP to restore him to (assuming his account doesn't exist yet), then you'd set ip_choice=select&ip=1.2.3.4 instead of ip_choice=file.

A sample cron script, run as root daily on the backup/restore box, might be:

#!/bin/sh

#Who is doing the restore?
OWNER=admin
LOCAL_PATH=/home/${OWNER}/admin_backups

#choice can be 'file' to get the IP from the backup
#or 'select' which will use the IP set below.
IP_CHOICE=select
IP=1.2.3.4

#build the task.queue entry in a variable so nothing is printed
#if there are no backup files to restore
ENTRY="action=restore&local_path=${LOCAL_PATH}&owner=${OWNER}&when=now&where=local&type=admin"

if [ "${IP_CHOICE}" = "select" ]; then
       ENTRY="${ENTRY}&ip_choice=select&ip=${IP}"
else
       ENTRY="${ENTRY}&ip_choice=${IP_CHOICE}"
fi

cd ${LOCAL_PATH}
COUNT=0
for i in `/bin/ls *.gz`; do
{
       ENTRY="${ENTRY}&select${COUNT}=$i"
       COUNT=$(( $COUNT + 1 ))
};
done;

#nothing to restore: don't emit a task.queue entry at all
if [ "${COUNT}" -eq 0 ]; then
       exit 1;
fi

echo "${ENTRY}"

exit 0;

Adjust the variables as needed. This will spit out the sample task.queue entry, which you'd dump to the task.queue in your cron, e.g.,

/root/restore_all.sh >> /usr/local/directadmin/data/task.queue

How to check disk usage with a script before running backups

This script is to be used to prevent backups from being created if your disk usage is too high.
This does not work with "System Backup" (it already has its own check).
It works with all 3 Levels of DirectAdmin Backups.

  1. Create the script /usr/local/directadmin/scripts/custom/user_backup_pre.sh

  2. In that script, add the code:

#!/bin/sh

PARTITION=/dev/mapper/VolGroup00-LogVol00
MAXUSED=90

checkfree()
{
        DISKUSED=`df -P $PARTITION | awk '{print $5}' | grep % | cut -d% -f1`
        echo "$DISKUSED < $MAXUSED" | bc
}
if [ `checkfree` -eq 0 ]; then
        echo "$PARTITION disk usage is above $MAXUSED% Aborting backup.";
        exit 1;
fi

exit 0;

Where you'd replace /dev/mapper/VolGroup00-LogVol00 with the filesystem you want to check.

To see the list of filesystem names, type:

df -hP

where the filesystem/partition names are in the leftmost column and the mount points are on the far right.

The MAXUSED value is the percentage threshold of the partition to be used.

  3. Make it executable:
chmod 755 /usr/local/directadmin/scripts/custom/user_backup_pre.sh

Credit for this script: Dmitry Sherman

How to backup a user's /home to same server with rsync

This guide will show you how to back up your entire /home and copy it to /backup/0/home or /backup/1/home, depending on the day of the week. This will be useful if you have very large tar.gz backups, or want to greatly reduce the system load backups create (i.e., by excluding "Domains Directory" and "E-Mail Data").

  1. Create the following /root/rsync.sh script.

  2. In this script, add the code:

#!/bin/sh

BACKUP_SOURCE="/home"

DAY_OF_WEEK=`date +%w`
ZERO_ONE=$(($DAY_OF_WEEK % 2))
BACKUP_DESTINATION="/backup/$ZERO_ONE"

mkdir -p ${BACKUP_DESTINATION}

ionice -c3 nice -n 19 rsync -q -a -W --delete $BACKUP_SOURCE $BACKUP_DESTINATION >/var/log/rsync.log 2>&1

echo `date` > ${BACKUP_DESTINATION}/last_rsync.txt

Note that the ZERO_ONE variable just converts the day of the week (0-6) into either a 0 or a 1. You can make it do whatever you'd like (e.g., if you want 3 backups, change "% 2" to "% 3", which would give you backup folders 0, 1, and 2; see the sketch below).
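
As a concrete illustration of that change, a three-way rotation would look like this (only the modulus and the resulting folder index differ from the script above):

DAY_OF_WEEK=`date +%w`
# 0, 1 or 2, cycling through the week
DEST_INDEX=$(($DAY_OF_WEEK % 3))
BACKUP_DESTINATION="/backup/$DEST_INDEX"

The rest of the script stays the same.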

  3. Make the script executable, and then run it to ensure it works:
cd /root
chmod 700 rsync.sh
./rsync.sh

If you don't want to wait around, you can press Ctrl-C and let cron do it later. If you do, just make sure that it did start to copy the files over by checking the backup directory.

  4. To enable automatic calling of the script with cron, type:
crontab -u root -e

and add the code:

30 4 * * 2,5 /root/rsync.sh

which will run on days 2 and 5 of the week (Tuesday and Friday).

*** Make sure your math lines up for the days of week vs the "% 2" value you've picked. Changing "2,5" to "*" will run it every day.

How to use rsync to backup /home to remote server

As webservers get more and more disk space, they can hold more data, and as such, the backup process takes longer than before.

For servers where the bulk of the data is stored in email data or public_html data (uploaded by the User), using rsync on /home is a great alternative to including that data into your DA backups.

NOTE: You must restore the DA accounts before doing the rsync, or the DA restore may have errors. You can repeat a DA backup/restore a 2nd time if you want a more updated version, but DA must create accounts first, before doing the rsync. Debian: If you rsync /home/mysql, this means you have to ensure /usr/local/directadmin/conf/mysql.conf is from the old box.

  1. You would still need to create DirectAdmin backups at Admin Level -> Admin Backup/Transfer, however, in Step 4, where you would select the data you want to include, de-select the "Domains Directory" and "E-Mail Data". These 2 items are stored in /home, thus rsync would handle them instead.

  2. Such a root-run script could be used to push all /home data over to another remote.hostname.com box. In this example the remote SSH user is named after this server's short hostname, and the data lands under /home/$BACKUP_USER/users on the remote box:

#!/bin/bash
BACKUP_HOST="remote.hostname.com"
BACKUP_USER=`hostname -s`
BACKUP_SOURCE="/home"
BACKUP_DESTINATION="/home/$BACKUP_USER/users"
ionice -c3 nice -n 19 rsync -q -a --delete -e ssh $BACKUP_SOURCE $BACKUP_USER@$BACKUP_HOST:$BACKUP_DESTINATION >/var/log/backup.log 2>&1

NOTE that this also means that doing a restore would require an extra step:

  • restore the DirectAdmin Backup
  • rsync the data back to the restore box, but do remember to adjust the /home/user/domains path to /home/user for a given User (see the sketch after this list).
  • SquirrelMail and Webmail (Uebimiau, if you use it) are also part of the "E-Mail Data" checkbox. Include /var/www/html/squirrelmail/data and /var/www/html/webmail/tmp, to include webmail settings/data.
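
A rough sketch of that rsync-back step, assuming the push script above was used (so the data sits under /home/<remote user>/users/home on the backup box) and restoring a single hypothetical user "bob"; the remote username and hostname are placeholders:

#!/bin/sh
# BACKUP_USER is the remote account the push script uploaded to
# (it was set to the source server's short hostname in the script above)
BACKUP_USER=source-hostname
# trailing slash so the contents land directly inside /home/bob
rsync -a -e ssh ${BACKUP_USER}@remote.hostname.com:/home/${BACKUP_USER}/users/home/bob/ /home/bob/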

How to keep a local copy and remote backup at the same time

Since a single backup cron only allows either local or remote storage, typically two backup crons would be needed to have both. This is less desirable, as it doubles the backup overhead on the box.

The solution is to only create 1 backup cron for FTP, and use the user_backup_post.sh script to copy the file locally before it's uploaded with FTP and deleted.

  1. Create the following file /usr/local/directadmin/scripts/custom/user_backup_post.sh

  2. In that file, add the following code for just admin:

#!/bin/sh

#############
#set this as needed
RESELLER=admin

#where do you want to save the local copy?
SAVE_PATH=/home/$RESELLER/admin_backups
#############

BACKUP_PATH=`echo $file | cut -d/ -f1,2,3,4`
REQUIRED_PATH=/home/tmp/${RESELLER}.

if [[ "$BACKUP_PATH" == ${REQUIRED_PATH}* ]]; then
      NEW_FILE=${SAVE_PATH}/`echo $file | cut -d/ -f6`
      cp -fp $file $NEW_FILE
fi
exit 0;

Or use the following code for all resellers to their own path:

#!/bin/sh

#############
#set this as needed
RESELLER=$reseller

#where do you want to save the local copy?
SAVE_PATH=/home/$RESELLER/user_backups
#############

BACKUP_PATH=`echo $file | cut -d/ -f1,2,3,4`
REQUIRED_PATH=/home/tmp/

if [[ "$BACKUP_PATH" == ${REQUIRED_PATH}* ]]; then
      NEW_FILE=${SAVE_PATH}/`echo $file | cut -d/ -f6`
      cp -fp $file $NEW_FILE
fi
exit 0;

Note: because DirectAdmin appends a PID to the temporary backup path, the path comparisons above use a trailing dot and a wildcard match.

  3. And make it executable:
chmod 755 /usr/local/directadmin/scripts/custom/user_backup_post.sh

How to extract and repack a user tar.gz backup file

If a tar.gz is not 100% correct, you may need to extract it and re-compress it in order for DA to read it correctly. Any errors with "tar" will abort the restore process. Re-packing removes the errors by dropping the erroneous areas of the file. Note that this may result in the loss of the corrupted data, but it will at least help to restore the account.

It's always best to try and recreate the tar.gz file from the source, instead of repairing a broken file. This is to give you the highest chance of restoring all data correctly. Only use this guide as a last resort if the original data does not exist anymore, or can no longer be accessed.

Assumptions:

  • backup file: user.admin.bob.tar.gz
  • backup path: /home/admin/admin_backups/user.admin.bob.tar.gz
  1. Extract the tar.gz:
cd /home/admin/admin_backups
cp user.admin.bob.tar.gz user.admin.bob.tar.gz.backup
mkdir temp
cd temp
tar xvzf ../user.admin.bob.tar.gz
  2. At this point, the backup is extracted, and you can do whatever you need to. Modify files, add/remove parts, etc.

  3. Now that the data is as desired, re-compress it. Make sure you made a backup of the tar.gz in step 1, with the cp command. To re-compress:

cd /home/admin/admin_backups/temp
tar cvzf ../user.admin.bob.tar.gz *
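
Before restoring, it can be worth confirming that the repacked archive reads cleanly end to end (a quick sanity check, not a DA requirement):

tar tvzf /home/admin/admin_backups/user.admin.bob.tar.gz > /dev/null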

Compressing and uncompressing tar.zst files

Let's assume you want to back up the directory backup into the file backup.tar.zst. You can create the tar.zst file with the command:

tar --preserve-permissions --use-compress-program /usr/local/bin/zstdmt -cf backup.tar.zst backup

To extract the file, run:

tar --preserve-permissions --use-compress-program /usr/local/bin/zstdmt -xf backup.tar.zst
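
Similarly, the contents of a tar.zst file can be listed without extracting it, using the same compress program:

tar --use-compress-program /usr/local/bin/zstdmt -tf backup.tar.zst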

mysqldump timeout

An internal default has been added: database_dump_timeout=14400

which allows up to 4 hours for a mysqldump call before DA will send a SIGTERM to kill it. It was reported that corrupted tables can hang mysqldump, thus hanging the entire backup process.

You can disable the timeout by setting

/usr/local/directadmin/directadmin set database_dump_timeout 0

and no internal alterations to signal handlers and alarm timeouts will be made.
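
The same command form can also be used to raise the limit rather than disable it; for example, to allow up to 8 hours (28800 seconds):

/usr/local/directadmin/directadmin set database_dump_timeout 28800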

MySQL server has gone away during a restore

During a restore, if you get an error similar to:

Unable to restore database user_db.sql to user_db : ERROR 2006 (HY000) at line 403 in file:
'/home/user/backups/backup/user_db.sql': MySQL server has gone away

it could mean one of the following:

  1. The server timed out (8-minute default) and disconnected. This is adjusted by the wait_timeout value in my.cnf.

  2. MySQL could have been restarted mid-restore.

  3. For many large databases, if the max_allowed_packet option in the [mysqld] section of /etc/my.cnf is too small, mysqld may abort when it hits a packet larger than that value.

If you've installed the my-huge.cnf, it has a default max_allowed_packet value of 1M. Try changing it to 5M or 20M, then restart mysqld and try again.
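
A minimal sketch of the relevant /etc/my.cnf section (the values below are examples to tune, not recommendations):

[mysqld]
max_allowed_packet=20M
wait_timeout=600

Restart mysqld (or mariadb) afterwards and retry the restore.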

Hard-link found in backup message

If you're getting this message, it means that DirectAdmin is running with the backup_hard_link_check=1 option, which is enabled by default, and that the client has a hard link under their account.

The (2) in the subject refers to which area of the backup was running when the link was found. The numbers are 1=~/Maildir, 2=~/imap, 3=backup assembly area before backup starts, and 4=~/domains.

Hard links are a duplicate inode/pointer to the actual file contents. Either link can be deleted, and the main data is still there (unlike a symbolic link, where there is only 1 inode and the symlink points to the main inode). The danger with hard links is that they allow a non-privileged User to add a new entry point to a sensitive file on the system. However, the ownership and read permissions remain in their original state, so the User still cannot read them. The danger comes if something does read the file in a privileged state, like the backup system if it were to run as root.

Luckily, we don't do that, so as long as you run in the default state of strict_backup_permissions=1, the backup tar.gz files are created as the User, which cannot read the hard links.

If you don't want to get these notices, but still want to be safe, you can disable the check as long as you keep strict_backup_permissions=1 enabled.

  1. Disable the check by setting backup_hard_link_check=0 in the directadmin.conf file.

  2. ENSURE you have strict_backup_permissions=1.

  3. As it is possible for hard links to be valid under ~/imap, enable direct_imap_backup=1, which has the added bonus of speeding up the backup process.

  4. And configure tar to not throw errors for files that are removed or changed during the backup, via the directadmin.conf extra_backup_option (a combined example of all four settings follows this list):

    extra_backup_option=--warning=no-file-removed --warning=no-file-changed
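
A combined sketch of steps 1-4 above, using the da config-set form shown earlier in this guide; quoting the space-containing extra_backup_option value on the command line is an assumption, and that line can instead be added to directadmin.conf by hand:

da config-set backup_hard_link_check 0
da config-set strict_backup_permissions 1
da config-set direct_imap_backup 1
da config-set extra_backup_option '--warning=no-file-removed --warning=no-file-changed'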

Common FTP backup errors

Remote FTP might not always provide the most descriptive message, so we have a few tips that might help hunt things down.

curl: (55) Send failure: Connection reset by peer

This error means the remote FTP server disconnected from the sending client. The best way to narrow down the cause is to login to that remote FTP server and check its logs. Some reported causes of this error:

  1. FTP User Over Quota
  2. The firewall is not set up correctly; ensure remote ports 20, 21, and 35000-35999 are open. Or test by shutting off the firewall to see if that changes anything (then adjust and re-enable it after testing).

There are most likely other possible causes, hence the need to check your remote FTP server log for the best info, as the remote server is the side that did the disconnect. Only it would know why.

curl: (55) SSL_write() returned SYSCALL, errno = 110

This usually means the client side does not support the minimum SSL/TLS protocols required by the remote FTP server. Update your server's openssl version (often requiring a newer OS), or lower the minimum SSL requirements on the FTP server (not recommended).

curl: (55) Send failure: Connection timed out

The official error text for this error is "Failed sending network data", which could cover several possibilities. The first suspect is the firewall; temporarily shut off the firewall on both ends to quickly confirm whether this is the cause. The remote FTP server needs ports 21, 20, and the passive port range open (DA's FTP servers use 35000-35999). On the client end, ensure those ports can be reached (try a telnet to port 21 on the remote server from the local SSH session).

curl: (7) Failed to connect to ftp.host.com port 21: Connection refused

Check to ensure that the network is up and that port 21 is open on the remote server's firewall.

For many of these, it's often a good idea to also test from a stand-alone client on a PC, such as FileZilla, to narrow down whether the issue is on the FTP server side or the upload/client side. Keep in mind that an FTP server may restrict connections to specific IPs, so this test may not always be fully accurate in determining the source of an issue.

All related curl error codes:
https://curl.haxx.se/libcurl/c/libcurl-errors.html
