Is it possible to sync / copy a range of files?

What is the problem you are having with rclone?

I'm trying to figure out whether, and if so how, I can perform a partial sync / copy.

What I would like to achieve: I have thousands of files to sync / copy. Is it possible to tell rclone to take only files within a range, say files 1-200? The next time I sync, I'd specify files 201-400, and so forth.

I am aware of filters, but even after applying them I would still have a lot of files to sync / copy, so I'd like to find out whether the sync / copy can be split into separate tasks. I have searched the docs but can't find any parameter for this.

Run the command 'rclone version' and share the full output of the command.

rclone v1.68.1

Which cloud storage system are you using? (eg Google Drive)

http

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I have been investigating the sync and copy commands.

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

...

A log from the command that you were trying to run with the -vv flag

...

welcome to the forum,

basically,

  1. get a list of all files using rclone ls src: > list.txt
  2. split list.txt into multiple files, for example, list01.txt, list02.txt
  3. feed each list to rclone - rclone copy src: dst: --files-from=list01.txt

also, you can take the output of rclone check and feed that to rclone.
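The steps above can be sketched as follows (a minimal sketch; the remote names src: / dst:, the chunk size of 200, and the synthetic file names are assumptions for illustration):

```shell
# demo input: a file list as produced by step 1 (here, synthetic names)
seq -f "file%04g.ext" 1 450 > list.txt

# 2) split list.txt into chunks of 200 names each:
#    listaa, listab, listac, ...
split -l 200 list.txt list

# 3) feed each chunk to a separate run, e.g.:
#    rclone copy src: dst: --files-from=listaa
```

Each run then only transfers the files named in that chunk, which gives you the "files 1-200, then 201-400" behaviour you described.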

Hi,

Thank you for the prompt reply. I just tried the steps above. Step one yields a list of files where each line has the form:

"[space][space][space][space][space]-1 [filename.ext]". I assume the -1 is a placeholder for the file size, which couldn't be resolved.

When I feed the list in as-is, no files are recognized (the list can't be processed correctly). I suspected the -1 part, so I removed it from a few lines, and then it works as expected. Is there a way to generate the list without the part before the file name?

I meant to write:
rclone lsf src --absolute
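For reference, lsf prints one bare path per line with no size column, so its output can be fed straight back in via --files-from (a sketch; the src: / dst: remote names and the extra --files-only / -R flags are assumptions, not from the thread):

```shell
# one bare path per line, recursing into subdirectories;
# --absolute prefixes each path with / for use in filter rules
rclone lsf src: --files-only -R --absolute > list.txt

# the resulting list works directly with --files-from
rclone copy src: dst: --files-from=list.txt
```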

Another approach to completing the transfer in a few steps is to use the --max-transfer or --max-duration flag. The steps are then defined by size or transfer time, which in most cases is much more practical than some artificial number of files...

The order in which files are transferred can be tweaked using the --order-by flag.

So, for example, I can run a transfer every night that finishes before 8 am and prioritizes the newest files first, taking care of older data as time permits.
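Combining those flags, a nightly job along those lines might look like this (a sketch; the 8-hour window and the src: / dst: remote names are assumptions):

```shell
# stop the run after 8 hours so it finishes before morning;
# transfer the most recently modified files first
rclone copy src: dst: \
  --max-duration 8h \
  --order-by modtime,desc \
  -v
```

Whatever doesn't fit into tonight's window is simply picked up by the next night's run.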

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.