I am using rclone to sync files from a directory on RHEL 8 (which is mounted from cloud NAS storage) to a directory on a Google Shared Drive.
What I see is that rclone is always active as a background process, constantly communicating with both the cloud NAS storage and Google, and using a large amount of bandwidth, especially during the day. This is when no sync is running, when things are, let's say, idle.
The bandwidth is even higher when the sync process is started.
During sync, bandwidth utilization is very high, and I want to limit this in both directions.
Run the command 'rclone version' and share the full output of the command.
rclone v1.62.2
os/version: redhat 8.8 (64 bit)
os/kernel: 4.18.0-477.27.1.el8_8.x86_64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.20.2
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone sync FOLDER1 FOLDER2
The sync process itself works fine, scheduled in crontab, but the amount of data is very suspicious, as it transfers a lot more to Google Drive than the actual amount of data generated daily on the NAS storage.
Before sync, I used the copy command instead, and that generated even more upload traffic than now.
Please assist me in properly solving this bandwidth limitation issue in both directions.
You have not posted all the details asked for... so it is difficult to guess :)
What is your crontab schedule? There is no magic here - from your description it sounds like you are running your rclone sync all the time, or again and again very often.
When sync is not working, how can you say that it is using bandwidth? That means it is working :)
If you want to throttle the speed of your sync, then use the --bwlimit flag.
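For example, something like this (as far as I know, the value before the colon is the upload limit, the value after is the download limit, and the units are bytes per second, not bits, so M here means MiB/s):

rclone sync FOLDER1 FOLDER2 --bwlimit 10M:1M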
Apologies for my misinterpretation. I must admit that I am at a junior level in the Linux environment and have no previous experience with rclone. I just tried to follow the official documentation.
My crontab schedule is that every morning at 04:30 the rclone sync process should start and then sync the files that were generated in the mounted folder to Google Drive.
The process itself starts, but the thing is that I see it working all the time, and for that it uses a large amount of bandwidth, which is not good.
When I said sync is not working, I meant that my presumption was that the sync process had finished its job and rclone was still using bandwidth after that. This was my guess.
I wanted to use the --bwlimit flag inside the script, but nethogs shows me that the limit is not enforced and the bandwidth goes really high in both upload and download.
My goals are:
I want the sync process not to run all the time, but only to finish the syncing that is started in the morning by the cron schedule. I don't want rclone to work at all once the sync is done - just to be started in the morning, that's all.
I want to enforce a bandwidth limit for rclone of 10 Mbps upload and 1 Mbps download.
What I have noticed is that on my firewall I have registered the outgoing traffic rclone is making towards IP addresses of Google Gmail servers and Google-Web servers. My guess is that Google-Web are actually the servers for Google Drive, but the question is why I have traffic from rclone towards Gmail servers. Maybe that's a glitch in registering/identifying the destination IP addresses (maybe all the IPs are Google Drive ones), but I am not sure.
#!/bin/bash
if [ ! -d /root/gdrive/Network ]
then
echo G-Drive is not mounted. Mounting G-Drive to /root/gdrive folder...
rclone mount cty-backup: /root/gdrive/
echo
fi
if [ -d /root/gdrive/Network ]
then
echo Syncing backup to G-Drive...
rclone sync /mnt/backup/ gdrive/Backup_configs/
echo Sync DONE!
fi
Why don't you sync data directly to your gdrive? You do not need a mount for it.
rclone sync /mnt/backup cty-backup:Backup_configs
Run this from the command line first and see how long it takes, because maybe your sync is very slow and is running all day. How much data is in the /mnt/backup directory?
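For example, something like this will show live transfer statistics and tell you how long a full pass takes (adjust the remote name if yours differs from cty-backup):

time rclone sync /mnt/backup cty-backup:Backup_configs --progress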
Your crontab job does not have any mechanism in place to prevent overlapping sync jobs from running. You should either use systemd or, for crontab, use for example flock, as in the sketch below.
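As a rough sketch, assuming your sync script is saved at /root/backup-sync.sh (adjust the path to wherever yours actually lives), a crontab entry like this will simply skip a run if the previous one has not finished yet:

# 04:30 daily; flock -n refuses to start if the lock is still held by a running sync
30 4 * * * /usr/bin/flock -n /var/lock/rclone-backup.lock /root/backup-sync.sh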