Rclone copy vs sync for a folder of 1 million+ files

What would be the best option(s) in rclone for a one-time and then periodic sync of 1+ million files?
Assume two tree cases:

  1. one folder with 1+ million files, i.e., a tree depth of 1
  2. 1+ million files distributed recursively with a tree depth of, say, 7 or more

Assume 1,000 new files are added daily in both cases.
I understand directory scanning (likely with the sync option) will be expensive on such a large tree, on both source and destination.

Which cloud provider? The advice will depend on that.

This will use a lot of memory, on the order of 1GB, as rclone has to load the whole directory listing into RAM.

Scanning for sync can be expensive. However, you can use rclone copy --max-age for an efficient sync of new files only.
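A minimal sketch of what that daily incremental copy could look like (the source mount point and bucket path here are placeholders, and `--max-age=24h` assumes the job runs roughly once a day):

```shell
# Copy only files modified in the last 24 hours. rclone still walks the
# source tree, but skips transferring anything older than the cutoff.
rclone copy /mnt/fss/data remote:bucket/data \
    --max-age=24h \
    --no-traverse \
    --transfers 16 \
    --checkers 32
```

`--no-traverse` tells rclone not to list the whole destination up front, which helps when only a small fraction of a huge tree has changed; the transfer and checker counts are just illustrative starting points.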

The cloud provider is Oracle Cloud Infrastructure (OCI), and the rclone transfer is from an (NFSv3) File Storage Service (FSS) mount to Object Storage.

Thank you for the note about memory use being "of the order of 1GB".

I will try rclone copy --max-age, assuming I will still see high READDIRPLUS NFS ops on the FSS side.


Yes, it still has to read the local files, but that is usually much quicker than listing the remote files.
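One caveat worth noting: copy with --max-age never deletes anything on the destination, so files removed from FSS will linger in Object Storage. A periodic full sync (say, weekly) can reconcile deletions; this is a sketch with the same placeholder paths as above:

```shell
# Full reconciliation pass: this lists both sides completely, so it is
# expensive on a 1M+ file tree and best run infrequently (e.g. weekly).
# --fast-list reduces the number of listing transactions on backends
# that support recursive listing, at the cost of more memory.
rclone sync /mnt/fss/data remote:bucket/data \
    --fast-list \
    --checkers 32
```

Whether `--fast-list` helps depends on the backend; it is worth benchmarking against a plain sync on your Object Storage setup before committing to it.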
