How many files is a "small number of files" with regard to --no-traverse?

What is the problem you are having with rclone?

No problem, looking for usage guidelines

What is your rclone version (output from rclone version)

1.53.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Debian, 64-bit

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I am trying to optimize moving files of different sizes (jpg's and videos) to Gdrive storage. I've settled on doing this in 2 passes: one to cover the many smaller jpgs, where I will run multiple transfers at once, and one to cover the larger videos, which I will do one at a time.
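For concreteness, this is roughly what the two passes look like in my script (the paths and remote name here are just placeholders):

    # pass 1: the many small jpgs, several transfers at once
    rclone move /data/photos gdrive:photos --include "*.jpg" --transfers 3

    # pass 2: the large videos, one at a time
    rclone move /data/videos gdrive:videos --transfers 1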

The documentation of --no-traverse doesn't call out a number, so it's unclear to me how to make the decision as to what a "small number of files" is; that's a highly subjective appraisal. So I'm looking for guidance there to determine whether I should be using it.

And while I'm at it, I've gathered that if I'm reasonably certain (which I am) that the files being copied don't exist on the destination, I should probably use --no-check-dest as well? Would using --no-check-dest negate any need for --no-traverse?
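In other words, would a pass 1 along these lines (placeholders again) make the --no-traverse question moot?

    rclone move /data/photos gdrive:photos --no-check-dest --transfers 3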

Thanks!

Rclone novice here but have read enough to understand this forum topic applies:

Assuming you have liberal bandwidth I suggest:

  • Do the big video files first (see the sketch below)
  • Manage small file upload order to meet requirements
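For the first point, one way you might do it in a single run, assuming a reasonably recent rclone (I believe --order-by arrived around v1.52; paths and remote are placeholders):

    # transfer the biggest files first
    rclone move /data gdrive:backup --order-by size,descending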

how many files are you moving to gdrive?
is this a one-time move or do you plan to run that move command in the future?

i think that the limiting factor is https://rclone.org/drive/#limitations
"limited to transferring about 2 files per second only"

It depends on the backend. My advice is don't use it ever on Google drive, the drive servers hate it!

Ok, that's good to know! It might be useful, in the future, to have a section in the documentation about remote optimization/best practices, somewhere to collect information like this.

--fast-list will help later in API-rate-limited cases.
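For example (path and remote are illustrative only):

    rclone move /data gdrive:backup --fast-list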

The number of files varies, and I will be repeating it. I have it scripted out, and realized yesterday that one run seemed to be taking particularly long, and I had a bunch of jpg's in that grouping.

You know, now that you mention that 2-files-per-second thing, I do vaguely recall it. That said, I've (in my forgetful ignorance of that limit) generally been running --transfers 3 for moving jpg files, and I haven't seen any rate limiting that I can recall. I suspect it's probably averaging out to only about 2 files at any given time, so I'm probably not hitting the limit as a result. I think I arrived at that experimentally; I do start hitting it at 4-5 simultaneous transfers.
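For anyone following along, this is roughly how I've been eyeballing it; my assumption is that any rate-limit retries show up in the debug log (paths are placeholders):

    # -vv makes pacer / retry messages visible when Drive pushes back
    rclone move /data/photos gdrive:photos --transfers 3 -vv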

Even --transfers 2 is going to be way more effective; the scale-up for these small-file operations from going to just 2 or 3 feels at least linear to me.

Thanks!

This is how I do it now, but typically I had separated out the small- and large-file folders. One slipped through, and I realized the one-at-a-time on the small files was really dragging it down. So I decided to rework it a little, and started wondering about --no-traverse in the process.
