Is it possible to disable checkers?

When transferring from GDrive encrypted remotes, we’re seeing extremely long delays when restarting a transfer (on the order of 15 minutes).
We’re using:
rclone copy gdrive: ./ --files-from files_list.txt
files_list.txt lists only files that have not been downloaded yet. (We regenerated it before resuming the transfer.)
Can we disable checkers altogether, and have Rclone just assume that files-from is valid, or rely just on the existence of the files?
The goal, basically, is to avoid polling gdrive for file listings when resuming a transfer; that listing appears to be what’s causing the delay.
I may be totally misunderstanding something, as well.
We’re using Rclone 1.45 and 1.46, and the problem appears to happen with both.
I’ve also seen this slow down when uploading a number of files to a large directory.

Any help or explanations are appreciated.

You probably want to use --fast-list. Create a log of your command with -vv and share it, and we can figure out what the delay is.
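For example, something like this (the rclone.log file name is just a placeholder I’m suggesting; the rest are the flags you’re already using):

rclone copy gdrive: ./ --files-from files_list.txt --fast-list -vv --log-file rclone.log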

I’m linking logs of uploading 5 MB out of 11 MB and re-downloading them, both with --fast-list and without. Happy to do more debugging as needed.
The uploading is limited to 2 rps so I don’t conflict with anything else I’m running or hit rate-limit errors.
https://0x0.st/zZi2.zip
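(For context, a cap like the 2 rps mentioned above is the sort of thing rclone’s --tpslimit flag handles, e.g. --tpslimit 2; the exact mechanism used for these runs isn’t shown in the logs.)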

Your logs seem fine, but no need for the dump headers as I didn’t ask for that :slight_smile:

You should just use 1.46. You seem to be using some DEV version of 1.45, so I have no idea what that is.

Transferred:        2.440M / 2.440 MBytes, 100%, 3.642 kBytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:         2559 / 2559, 100%
Elapsed time:    11m26.2s

With Google, you can only create about 3 files per second, so lots of small files take a long time; roughly 2,500 files in about 11 minutes seems about right.
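To put rough numbers on it: 2,559 files at about 3 creations per second works out to roughly 850 seconds, or around 14 minutes, which is in the same ballpark as the 11.5 minutes in the stats above.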

The downloading is the issue in this case.
In case I was unclear earlier…
If you download files with copy, and you have a list of files using --files-from, it checks all the files first to see if they exist on gdrive before it begins to pull them down.
I want Rclone to start pulling them down first, or pulling them and checking at the same time.
Otherwise, it delays for a long time while the lists finish.
(Currently, the checking does not seem to happen in parallel with the downloading, at least not for the first couple of minutes.)
Taken from v1.46:
https://0x0.st/zZ8o.fastlis
And thank you for your help. I really appreciate it.

You need to make and use your own client API/key:

https://rclone.org/drive/#making-your-own-client-id

[felix@gemini ~]$ rclone copy GD: --files-from filesfrom.txt /home/felix/out/ -vv
2019/04/05 14:56:48 DEBUG : rclone: Version "v1.46" starting with parameters ["rclone" "copy" "GD:" "--files-from" "filesfrom.txt" "/home/felix/out/" "-vv"]
2019/04/05 14:56:48 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/04/05 14:56:49 INFO  : Local file system at /home/felix/out: Waiting for checks to finish
2019/04/05 14:56:49 INFO  : Local file system at /home/felix/out: Waiting for transfers to finish
2019/04/05 14:56:50 INFO  : test/file049.bin: Copied (new)
2019/04/05 14:56:50 INFO  :
Transferred:   	   13.396k / 13.396 kBytes, 100%, 6.652 kBytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            1 / 1, 100%
Elapsed time:          2s

2019/04/05 14:56:50 DEBUG : 4 go routines active
2019/04/05 14:56:50 DEBUG : rclone: Version "v1.46" finishing with parameters ["rclone" "copy" "GD:" "--files-from" "filesfrom.txt" "/home/felix/out/" "-vv"]

What’s actually in your files-from file? I tested with a few files in there and it works super fast. I kicked it off at the root of my GD: and I have almost 60TB of data.

Are you writing to another Google Drive mount?

2019/04/05 14:39:08 DEBUG : a8/a816830000000000000000000000000000000000000000000000000000000000: Failed to pre-allocate: operation not supported
2019/04/05 14:39:08 INFO  : a8/a816830000000000000000000000000000000000000000000000000000000000: Copied (new)
2019/04/05 14:39:08 DEBUG : a2/a216270000000000000000000000000000000000000000000000000000000000: Failed to pre-allocate: operation not supported

Is this a mount?

2019/04/05 14:36:15 INFO  : Local file system at /home/bmmcginty/gdrivewtf/copy2: Waiting for transfers to finish

I created a test directory for this test. It’s got 256 subdirectories, with 10 files in each directory. (I realize this is odd, but it works well for storing hashed content, for example.) I do have my own client_id configured, as well.
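(That’s 256 × 10 = 2,560 files, which lines up with the 2559 transfers in the earlier run.)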

Based on your errors, you aren’t using your own API key; the log is littered with 403 rate limits.

If you are writing back to a mounted Google Drive, you’re back to creating about 3 files per second, so it takes a long time with a lot of little files.

When you create a key, you need to run through rclone config and add it in.
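After running through rclone config, the remote’s section of rclone.conf should end up with your own values in it, roughly like this (the values below are placeholders, not real credentials):

[GD]
type = drive
client_id = 1234567890-example.apps.googleusercontent.com
client_secret = your-client-secret
scope = drive
token = {"access_token":"..."}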

It’s a local filesystem path.

What’s the file system?

The FS is ext3.
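(If it matters, that would also explain the “Failed to pre-allocate: operation not supported” debug lines above; as far as I know ext3 doesn’t support fallocate, so the pre-allocation step fails harmlessly.)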
I checked my project via the google console, and I am using my own API key.
It doesn’t make sense, though, that Rclone is running all these checks and then downloading the files, rather than passing each checked file off to start downloading immediately.
It’s like they’re being queued for some indeterminate amount of time, for no obvious reason.

Can you share your files-from file? I’m happy to create a bunch of test files and try it out, as I can’t reproduce it with 30 files in random directories in a large GD.

You can pull the files with --drive-root-folder-id 1LMEvB-pzcrkIg2h2IT9N6qKxi07qmhoe
or get them here (they’ll unzip into your current directory, so you’ll want to be in an empty dir):
0x0.st/zZ8e.zip
Thanks.
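In case it helps, a full command using that folder ID might look something like this (the ./testdata destination is just a placeholder, and it assumes your gdrive: remote can see the shared folder):

rclone copy gdrive: ./testdata --drive-root-folder-id 1LMEvB-pzcrkIg2h2IT9N6qKxi07qmhoe -vv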

also, file used with --files-from is here:
0x0.st/zZKt.txt

I got all the files down, but the txt file comes back as not found for me.

Okay, try:
http://sprunge.us/iZum6M
(I assume your issue was retrieving the --files-from file, which is relinked above.)
Thank you much.

That helps as I can definitely recreate the issue now.

@ncw - I believe this relates back to the number of files in the files-from list, as it has over 2,000 files, and it seems to burst and look for all the files before transferring?

I can see that for the files in the files-from list, it seems to do an API hit for each file. If I ramp it up, the list/get calls error out like crazy.

@bmmcginty - what’s the use case for having the list? Perhaps there is a different way to achieve it?

Does --max-backlog have any effect if that is set low?

It doesn’t seem to; I tried setting it to 5 and 10.