Sync properly without having to download from the internet every time

What is the problem you are having with rclone?

I am trying to run a Borg backup script that maintains a local repository and uses rclone to sync it with OneDrive. Once an initial backup has been made manually, the script first fetches the entire repo from OneDrive, adds a new archive for the day, prunes, uploads the repo (now with the additional archive) back to OneDrive, and then deletes the local files. When fetching the repo (~1 GB), rclone normally does it almost instantaneously (~20 s), as if it were a local file; I have run the script repeatedly and made sure the destination of the sync was empty. The issue is that, possibly after a long period of inactivity AND a later restart, rclone is no longer able to fetch the repo instantaneously as it previously did (possibly because the cache is getting deleted) and takes horribly long, as if it were downloading all the files from OneDrive. How do I prevent this from happening? I am not totally sure what is resetting this, as the shift is seemingly random: after ~8 hrs of inactivity (in sleep mode) I was still able to fetch the repo in seconds, and after a reboot and re-run I was also able to fetch it in seconds, but some combination of the two might be affecting the speed. I have also referred to

and tried to access

How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
Accepted units are: "s", "m", "h".

these settings from

 rclone config

but to no avail, so I hastily added

info_age = 48h

at the end of rclone.conf, but that didn't really change anything.

Side notes:

  1. The speed of an "instantaneous" sync (200-600 MB/s) is well beyond my network speed, so it is evidently happening locally and this is not just a bad-network issue (System Monitor also shows very little network usage).
  2. Archiving directly to OneDrive with Borg throws a lot of errors, so I've gone through this process of syncing from a local directory, as suggested here

This is only feasible if I am able to fetch repo in a few seconds.
3. I am able to recreate the conditions for instantaneous fetching by deleting both repos (local and OneDrive), initializing a repository and an archive locally, and uploading to OneDrive. After that, deleting the local repo and fetching it back from OneDrive is instantaneous. From this point I can make changes to the backed-up files and still run the script repeatedly with instant downloads, but this desired outcome is not sustained.
4. No upload takes too long: the first upload takes around 10 minutes, and subsequent uploads range from near-instantaneous to 2-4 minutes.
5. I'm an Ubuntu and rclone newbie, so please be gentle :slight_smile:
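For concreteness, the cycle described above can be sketched as a script like the following. The remote name `one drive` is taken from the config below; the local repo path, remote repo path, and backup source are hypothetical placeholders, and it syncs against the remote directly rather than through the mounted folder (an assumption, not the poster's exact setup). Setting `DRY_RUN` makes it print the commands instead of running them, so the sketch can be exercised safely.

```shell
#!/bin/sh
# Sketch of the daily backup cycle. All paths are placeholders.
LOCAL_REPO="$HOME/BorgbackupSync"            # local copy of the Borg repo
REMOTE_REPO="one drive:Ubuntu Backups/Borg"  # rclone remote path (assumed)
BACKUP_SRC="$HOME/Documents"                 # what gets archived (placeholder)

# When DRY_RUN is set, echo each command instead of executing it.
run() {
    if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi
}

daily_backup() {
    run rclone sync --fast-list "$REMOTE_REPO" "$LOCAL_REPO"     # fetch repo
    run borg create "$LOCAL_REPO::{now:%Y-%m-%d}" "$BACKUP_SRC"  # add today's archive
    run borg prune --keep-daily 7 "$LOCAL_REPO"                  # prune old archives
    run rclone sync --fast-list "$LOCAL_REPO" "$REMOTE_REPO"     # upload back
    run rm -rf "$LOCAL_REPO"                                     # delete local files
}
```

Running `DRY_RUN=1 daily_backup` prints the five commands without touching anything, which makes it easy to check the order of operations before running it for real.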

What is your rclone version (output from rclone version)

rclone v1.50.2

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04.2 LTS, 64-bit

Which cloud storage system are you using? (eg Google Drive)

OneDrive for business

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync --fast-list --progress "/home/.../OneDrive/Ubuntu Backups/Borg/" /home/.../BorgbackupSync

The rclone config contents with secrets removed.

[one drive]
type = onedrive
token = ***
drive_id = ***
drive_type = business
chunk_size = 10M
info_age = 48h

A log from the command with the -vv flag

Desired outcome:

Are you running an rclone mount as well? Can you post your command line for that please?

I suspect the problems you are having are due to the mount caching.

Are you running the cache backend? Your config says not, but I'm not sure from what you said.

Note that info_age only applies to the cache backend.
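To illustrate: `info_age` would only do anything inside a config stanza of `type = cache` that wraps the OneDrive remote, something like the sketch below (the cache remote's name here is made up). Putting it in the `[one drive]` stanza, as in the config above, is silently ignored because the onedrive backend has no such option.

```ini
[one drive]
type = onedrive
token = ***
drive_id = ***
drive_type = business
chunk_size = 10M

[one drive cache]
type = cache
remote = one drive:
info_age = 48h
```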

This is a relatively old version; I'd recommend the latest stable (you can download and install the .deb from the latest release).

 sh -c "rclone --vfs-cache-mode writes mount \"one drive\":  ~/WA04/OneDrive"  

Thank you for taking the time to respond. This command mounts OneDrive on startup; I'll make sure to update rclone. I'm afraid that's all I know, though. I do not understand what backend caching is or how to enable it.

Would adding this solve my issue?

--vfs-cache-max-age 48h

If you want the files kept for 48h then yes it would.

You might consider using --vfs-cache-mode full instead of writes, which will make sure everything is cached when read; that might make things more efficient for you.
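Putting the two suggestions together, the startup mount line could look something like this (remote name and mount point taken from the command posted earlier; the command is held in a variable and echoed here only so the sketch can be inspected without actually mounting anything):

```shell
# Combined mount invocation: cache reads as well as writes, and keep
# cached files for 48 hours before expiring them.
MOUNT_CMD='rclone mount --vfs-cache-mode full --vfs-cache-max-age 48h "one drive": ~/WA04/OneDrive'
echo "$MOUNT_CMD"   # to actually mount: sh -c "$MOUNT_CMD"
```

Note that --vfs-cache-max-age takes a bare duration such as 48h; the word "duration" in the help output is just a placeholder for the value.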

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.