Rclone move --max-transfer

I have a few questions about the new rclone --max-transfer flag.

  1. Would this setup work?
    I have rclone mount running as a systemd service without --max-transfer because I want unlimited data when downloading my files from Google Drive. Separately, I run rclone move once a day with a --max-transfer limit of 26G. Does this mean that once the rclone move command reaches 26G of uploaded data, counted from the time I executed it, the command will exit? What happens to my mount, will it be affected in any way?
    Or is the 26G limit based on one day's transfer, as in rclone checking the day?

  2. A preliminary test of rclone move with the --max-transfer flag gave weird results…
    Rclone move itself was OK, but during this time my rclone mount behaved rather strangely. I’m using VFS cache mode with the parameters below, and rclone didn’t seem to fetch the next chunk of data to download, resulting in an abrupt quit while playing a TV show in Plex. There was no error in the rclone log, but I could see from netdata and the Plex playback that rclone wasn’t downloading the buffer as it is supposed to.
    /usr/sbin/rclone mount GDrive1: /mnt/Plexdrive \
      --config=/home/aaa/.config/rclone/rclone.conf \
      --vfs-cache-mode writes \
      --no-checksum \
      --no-modtime \
      --attr-timeout=4s \
      --allow-other \
      --allow-non-empty \
      --dir-cache-time=4h \
      --vfs-cache-max-age=24h \
      --vfs-read-chunk-size=4M \
      --buffer-size=2M \
      --uid=111 --gid=118 --umask 000 --syslog

  3. Unrelated question: what does the rclone message ‘Duplicate object found in destination - ignoring’ actually mean?
    Does this mean that rclone found duplicate objects in **both local and remote**, thus ignoring them as in not uploading the data from local to remote? If yes, then it’s a bug, because I receive a lot of these messages and I certainly don’t have duplicates between local and remote…
    Or does it mean that rclone found duplicates that are both located in the remote?

The --max-transfer flag works per invocation of rclone - it doesn’t keep a global record. So it won’t affect your mount.
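So for your setup, a daily job along these lines (the /mnt/staging source path is just a placeholder for wherever your local files sit) would exit once it has uploaded 26G in that run, and the mount keeps going untouched:

```
# hypothetical daily upload - the 26G counter applies to this invocation only
/usr/sbin/rclone move /mnt/staging GDrive1: \
  --config=/home/aaa/.config/rclone/rclone.conf \
  --max-transfer 26G \
  --log-file=/var/log/rclone-move.log -v
```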

Perhaps network contention? If you limit the upload bandwidth on the move with --bwlimit does that help?
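Something like this would do it - the value is just an example, tune it to whatever leaves your line some headroom:

```
# cap the move's upload speed (--bwlimit takes bytes/s, so 1M ≈ 8 Mbit/s)
rclone move /mnt/staging GDrive1: --max-transfer 26G --bwlimit 1M
```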

On Google Drive you can have two objects with the same name, unlike on your local disk. Use rclone dedupe to sort out the duplicates.
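A minimal sketch, assuming your files live under something like GDrive1:Media:

```
# interactive by default - lists each set of duplicates and asks what to keep
rclone dedupe GDrive1:Media
```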

This could be it. I’ll try the --bwlimit flag next time. But do you think that setting --vfs-read-chunk-size=4M could also add to the network problem? The situation got so bad that it made my overall home network slow, perhaps from too many requests in a short time? As the values of these settings differ between my rclone config, rclone mount, and rclone move, I’m confused about which value actually takes priority…

I run a very modest server with an Intel Celeron J1900 CPU and a total power consumption of around 9 watts, serving quite a huge library of around 75TB made up of a lot of medium-sized files. That VFS chunk setting keeps rclone very light with almost no load on my CPU, while at the same time providing very fast Plex scans and analysis and near-instant Plex movie playback. However, every time I use rclone for anything other than the mount, after a few hours the overall network becomes slow. Yet there are no errors in the rclone log or the Google console dashboard other than a few 500s or 403s.

Could rclone dedupe falsely delete duplicate files?
My situation is that I have a lot of shared folders owned by other people. I’m in the middle of copying them into my own Google Drive, which changes the file ownership to mine as I copy them. I’m afraid that if I use dedupe, rclone will delete these files and leave me with no copies other than the files owned by other people. Or is dedupe smart enough to only recognize duplicates owned by the same user account?

This sounds like rclone move using all your upload bandwidth (which it will in order to send the files as quickly as possible). --bwlimit will fix this. I have the same problem with my home network too. If you use all the upload bandwidth then the ACKs for the downloads slow down and the downloads slow down too.
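If you only want the limit in force while you’re actually using the connection, --bwlimit also takes a timetable - the times and rates here are just an illustration:

```
# 512 KiB/s upload cap from 08:00, unlimited from 23:00
rclone move /mnt/staging GDrive1: --bwlimit "08:00,512 23:00,off"
```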

How much up and down bandwidth do you have?

rclone dedupe is interactive by default. If it finds exact duplicates (same size and md5sum) it will delete them without asking, otherwise it will ask which dupe you want to keep.

Dedupe does not take account of ownership of the files.

Try running with --dry-run to see what it would do.
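For example (again assuming a hypothetical GDrive1:Media path):

```
# report what dedupe would do without deleting or renaming anything
rclone dedupe --dry-run GDrive1:Media

# or keep every copy and just rename the clashing names
rclone dedupe rename GDrive1:Media
```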

From my ISP I should be getting 350Mbps down and 13Mbps up, but in reality I only get 250Mbps down and 12Mbps up.

I know that rclone uses all my 12Mbps up, but I’m surprised it affects my home network, where most of my devices are wired or have a very strong Wi-Fi signal. Either that, or it was a problem upstream at the ISP, as I received an apology email from my ISP.

Well, so far everything works perfectly. Thank you so much for making great software!


If you’re using all your 12Mbps upstream then your downloads will be slow, maybe even grinding to a halt. I would limit your upload to less than 9Mbps so you have enough upstream left for ACK packets - these are upload packets that tell your download source to keep sending more data as the previous ones were received OK.
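As a rough worked example against your numbers: --bwlimit is in bytes/s, and 1 MiB/s is about 8.4 Mbit/s, so a cap of 1M on the move stays under 9Mbps and leaves roughly 3-4Mbps free for ACKs:

```
# ~8.4 Mbit/s upload cap on a 12 Mbit/s line
rclone move /mnt/staging GDrive1: --bwlimit 1M
```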
