Is --drive-chunk-size taken into account from the config file?

What is the problem you are having with rclone?

I already have chunk_size set in rclone.conf, but when I specify it once again on the command line as --drive-chunk-size, I get drastically increased transfer rates. Is this really taken from the config file? How can I tell?

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.1

  • os/version: oracle 6.10 (64 bit)
  • os/kernel: 4.1.12-124.48.6.el6uek.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.9
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

crypt on top of Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --fast-list --transfers=20 --checkers=40 --tpslimit-burst 20 -P --drive-chunk-size=128M sync dir archive:dir

has upload transfer rates of as much as my link can stand (circa 10+ MiB/s, as I have a 1000 Mbps uplink)

Transferred:      992.327 MiB / 17.137 GiB, 6%, 9.281 MiB/s, ETA 29m43s

as opposed to

rclone --fast-list --transfers=20 --checkers=40 --tpslimit-burst 20  -P sync dir archive:dir

which has never gone over 2-3 MiB/s and mostly stays under 1

Transferred:        2.156 MiB / 10.307 GiB, 0%, 105.024 KiB/s, ETA 1d4h34m44s

The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = 
client_secret = 
scope = drive
token = 
team_drive = ...
root_folder_id = ...
use_trash = true
chunk_size = 128M
acknowledge_abuse = true
server_side_across_configs = true
stop_on_upload_limit = true
stop_on_download_limit = true
transfers = 8

[archive]
type = crypt
remote = gdrive:
password = 
password2 = 
server_side_across_configs = true
filename_encryption = off
directory_name_encryption = false

A log from the command with the -vv flag

# rclone -vv  --fast-list --transfers=20 --checkers=40 --tpslimit-burst 20 --drive-chunk-size=128M sync -P logs/ archive:logs
2022/05/04 14:13:30 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "-vv" "--fast-list" "--transfers=20" "--checkers=40" "--tpslimit-burst" "20" "--drive-chunk-size=128M" "sync" "-P" "dir/" "archive:dir"]
2022/05/04 14:13:30 DEBUG : Creating backend with remote "logs/"
2022/05/04 14:13:30 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/05/04 14:13:30 DEBUG : fs cache: renaming cache item "logs/" to be canonical "/backup/archive/logs"
2022/05/04 14:13:30 DEBUG : Creating backend with remote "archive:logs"
2022/05/04 14:13:30 DEBUG : Creating backend with remote "gdrive:logs.bin"
2022/05/04 14:13:30 DEBUG : gdrive: detected overridden config - adding "{OgfZc}" suffix to name
2022/05/04 14:13:31 DEBUG : fs cache: renaming cache item "gdrive:logs.bin" to be canonical "gdrive{OgfZc}:logs.bin"
2022/05/04 14:13:31 DEBUG : Creating backend with remote "gdrive:logs"
2022/05/04 14:13:31 DEBUG : gdrive: detected overridden config - adding "{OgfZc}" suffix to name
2022/05/04 14:13:31 DEBUG : fs cache: renaming cache item "gdrive:logs" to be canonical "gdrive{OgfZc}:logs"

You have quite the amount of checkers and transfers going on so it's a bit hard to tell what could be slowing you down.

chunk_size works fine from the config file as that's for uploading only.

There's only a little snippet of a log, so it's hard to see what's going on with the transfers; the progress output is only a snippet as well.

Google limits file creation, so it's really bad for small files: you can only create 1-3 per second, and nothing you do can change that.

A full debug log would shed more light, though.

yeah, I am indeed transferring a whole bunch of small files, but some big ones too; it's a mix of logs

the transfer rate goes down when it stumbles upon a bunch of small files, but it also goes up when it gets to big ones. I've been running without --drive-chunk-size for hours and have never seen the transfer rate come anywhere near the rates with the option. It's a crazy coincidence, if it is a coincidence.

my question was actually: is there a way to display the effective end configuration for a remote, taking into account all the ways one can override the config: rclone.conf, environment variables, command-line options and whatnot

in the -vvv log I can see that the config was overridden, but not exactly what was overridden or with what

If I'm not mistaken, anything from an environment variable or the config file is applied first, and then the command-line flags you specify are applied on top.

So in my example, you can see I ran with a 16M chunk-size flag, which overrides my environment/config file value, as the flag 'wins'.

felix@gemini:~$ rclone copy /etc/hosts DB: -vvv --dropbox-chunk-size 16M
2022/05/04 11:10:18 DEBUG : Setting --config "/opt/rclone/rclone.conf" from environment variable RCLONE_CONFIG="/opt/rclone/rclone.conf"
2022/05/04 11:10:18 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "copy" "/etc/hosts" "DB:" "-vvv" "--dropbox-chunk-size" "16M"]
2022/05/04 11:10:18 DEBUG : Creating backend with remote "/etc/hosts"
2022/05/04 11:10:18 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2022/05/04 11:10:18 DEBUG : fs cache: adding new entry for parent of "/etc/hosts", "/etc"
2022/05/04 11:10:18 DEBUG : Creating backend with remote "DB:"
2022/05/04 11:10:18 DEBUG : DB: detected overridden config - adding "{QEqRK}" suffix to name
2022/05/04 11:10:18 DEBUG : fs cache: renaming cache item "DB:" to be canonical "DB{QEqRK}:"
2022/05/04 11:10:18 DEBUG : hosts: Need to transfer - File not found at Destination
2022/05/04 11:10:19 DEBUG : hosts: Uploading chunk 1/1
2022/05/04 11:10:20 DEBUG : hosts: Uploading chunk 2/1
2022/05/04 11:10:20 DEBUG : Dropbox root '': Adding "/hosts" to batch
2022/05/04 11:10:21 DEBUG : Dropbox root '': Batch idle for 500ms so committing
2022/05/04 11:10:21 DEBUG : Dropbox root '': Committing sync batch length 1 starting with: /hosts
2022/05/04 11:10:22 DEBUG : Dropbox root '': Upload batch completed in 142.478935ms
2022/05/04 11:10:22 DEBUG : Dropbox root '': Committed sync batch length 1 starting with: /hosts
2022/05/04 11:10:22 DEBUG : hosts: dropbox = 36600f2d623ef48807551ee091ef25a9563094245b344b55654c77361add095b OK
2022/05/04 11:10:22 INFO  : hosts: Copied (new)
2022/05/04 11:10:22 INFO  :
Transferred:   	        236 B / 236 B, 100%, 78 B/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:         3.9s

2022/05/04 11:10:22 DEBUG : 10 go routines active
2022/05/04 11:10:22 INFO  : Dropbox root '': Commiting uploads - please wait...

Your config file with the value in it is fine, so there's no need to specify it on the command line as well. The 'overridden config' part means a flag you specified has overridden some value in your config file or environment variables. If I remove the flag, the override message goes away.

felix@gemini:~$ rclone delete DB:hosts
felix@gemini:~$ rclone copy /etc/hosts DB: -vvv
2022/05/04 11:12:24 DEBUG : Setting --config "/opt/rclone/rclone.conf" from environment variable RCLONE_CONFIG="/opt/rclone/rclone.conf"
2022/05/04 11:12:24 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "copy" "/etc/hosts" "DB:" "-vvv"]
2022/05/04 11:12:24 DEBUG : Creating backend with remote "/etc/hosts"
2022/05/04 11:12:24 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2022/05/04 11:12:24 DEBUG : fs cache: adding new entry for parent of "/etc/hosts", "/etc"
2022/05/04 11:12:24 DEBUG : Creating backend with remote "DB:"
2022/05/04 11:12:24 DEBUG : hosts: Need to transfer - File not found at Destination
2022/05/04 11:12:25 DEBUG : hosts: Uploading chunk 1/1
2022/05/04 11:12:26 DEBUG : hosts: Uploading chunk 2/1
2022/05/04 11:12:26 DEBUG : Dropbox root '': Adding "/hosts" to batch
2022/05/04 11:12:27 DEBUG : Dropbox root '': Batch idle for 500ms so committing
2022/05/04 11:12:27 DEBUG : Dropbox root '': Committing sync batch length 1 starting with: /hosts
2022/05/04 11:12:28 DEBUG : Dropbox root '': Upload batch completed in 150.108357ms
2022/05/04 11:12:28 DEBUG : Dropbox root '': Committed sync batch length 1 starting with: /hosts
2022/05/04 11:12:28 DEBUG : hosts: dropbox = 36600f2d623ef48807551ee091ef25a9563094245b344b55654c77361add095b OK
2022/05/04 11:12:28 INFO  : hosts: Copied (new)
2022/05/04 11:12:28 INFO  :
Transferred:   	        236 B / 236 B, 100%, 78 B/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:         3.7s

2022/05/04 11:12:28 DEBUG : 10 go routines active
2022/05/04 11:12:28 INFO  : Dropbox root '': Commiting uploads - please wait...
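The precedence described above boils down to "last setter wins". As a toy shell sketch (not actual rclone code; the env-var name follows rclone's documented RCLONE_ + flag-name convention, and the default value here is just illustrative):

```shell
# Each layer overwrites the previous one; the command-line flag ends up winning.
chunk_size="8M"                                        # built-in default (illustrative)
chunk_size="128M"                                      # chunk_size from rclone.conf
chunk_size="${RCLONE_DROPBOX_CHUNK_SIZE:-$chunk_size}" # env var, if set
chunk_size="16M"                                       # --dropbox-chunk-size 16M flag
echo "$chunk_size"                                     # prints 16M
```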

yes, I'm aware how that works according to the documentation, but documentation is one thing, and software is another :wink:

how can you be sure what overrode what, and with what exactly?

You can dump headers/requests if you want to inspect the transactions to see what's being passed specifically.

--dump headers,requests

I used Google Drive/rclone for quite some time so I'm very sure it works as I did quite a bit of testing with that particular flag before I moved on to Dropbox.

Again, you've shared a snippet of a log that doesn't say much. If you are moving lots of small files, Google stinks for that. You have a huge number of transfers/checkers, which is probably slowing you down. But without a full debug log I can't be sure what your issue is; that's why we ask for it, so we have all the details and don't have to guess.

how shall I create the logs?

I used rclone --log-level DEBUG --log-file /tmp/rclone-without-cmdline.log, but there is nothing interesting in there, just a bunch of

filename: : md5 = 600373a637dc6dd70a4c4018046532ab OK
filename: Copied (new)

and not much else, except at the beginning when it is figuring out what to copy, where it is full of

filename: Size and modification time the same (differ by 0s, within tolerance 1ms)
filename: Unchanged skipping

I am a bit hesitant about sharing the logs as I'll have to anonymize the filenames as they have quite a bit of metadata in them that I'm not supposed to share

Sorry, I just had a much better idea for seeing it specifically; it only requires a debug log.

My config file has 256M in it.

felix@gemini:/data$ rclone copy jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv GD: -vvv
2022/05/04 12:56:28 DEBUG : Setting --config "/opt/rclone/rclone.conf" from environment variable RCLONE_CONFIG="/opt/rclone/rclone.conf"
2022/05/04 12:56:28 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "copy" "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv" "GD:" "-vvv"]
2022/05/04 12:56:28 DEBUG : Creating backend with remote "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv"
2022/05/04 12:56:28 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2022/05/04 12:56:28 DEBUG : fs cache: adding new entry for parent of "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv", "/data"
2022/05/04 12:56:28 DEBUG : Creating backend with remote "GD:"
2022/05/04 12:56:28 DEBUG : Google drive root '': 'root_folder_id = 0AGoj85v3xeadUk9PVA' - save this in the config to speed up startup
2022/05/04 12:56:28 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: Need to transfer - File not found at Destination
2022/05/04 12:56:28 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: Sending chunk 0 length 268435456
2022/05/04 12:56:32 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: Sending chunk 268435456 length 268435456

You can see the length is 256M.

If I override it to 1G, you can see it's now 1G:

felix@gemini:/data$ rclone copy jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv GD: -vvv --drive-chunk-size 1G
2022/05/04 12:56:48 DEBUG : Setting --config "/opt/rclone/rclone.conf" from environment variable RCLONE_CONFIG="/opt/rclone/rclone.conf"
2022/05/04 12:56:48 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "copy" "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv" "GD:" "-vvv" "--drive-chunk-size" "1G"]
2022/05/04 12:56:48 DEBUG : Creating backend with remote "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv"
2022/05/04 12:56:48 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2022/05/04 12:56:48 DEBUG : fs cache: adding new entry for parent of "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv", "/data"
2022/05/04 12:56:48 DEBUG : Creating backend with remote "GD:"
2022/05/04 12:56:48 DEBUG : GD: detected overridden config - adding "{kysdd}" suffix to name
2022/05/04 12:56:48 DEBUG : Google drive root '': 'root_folder_id = 0AGoj85v3xeadUk9PVA' - save this in the config to speed up startup
2022/05/04 12:56:48 DEBUG : fs cache: renaming cache item "GD:" to be canonical "GD{kysdd}:"
2022/05/04 12:56:48 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: Need to transfer - File not found at Destination
2022/05/04 12:56:48 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: Sending chunk 0 length 1073741824
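The chunk lengths in those "Sending chunk ... length N" debug lines are exact byte counts, so dividing by 1024² recovers the chunk size and confirms which value won:

```shell
# Convert the byte counts from the debug lines back to MiB:
echo "$((268435456 / 1024 / 1024))M"    # chunk_size = 256M from the config file
echo "$((1073741824 / 1024 / 1024))M"   # --drive-chunk-size 1G on the command line
```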

That's a much cleaner way to check, but it does require a debug log.

So pick one file and validate, rather than running a huge amount through debug.
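One way to spot-check a debug log without sharing all of it is to filter for just the telling lines. A sketch, using sample lines modelled on the logs above in place of a real log file:

```shell
# Keep only the lines that reveal the effective chunk size and any
# overridden config. The sample log below stands in for a real debug log.
log="$(mktemp)"
cat > "$log" <<'EOF'
2022/05/04 12:56:48 DEBUG : GD: detected overridden config - adding "{kysdd}" suffix to name
2022/05/04 12:56:48 DEBUG : file.mkv: Need to transfer - File not found at Destination
2022/05/04 12:56:48 DEBUG : file.mkv: Sending chunk 0 length 1073741824
EOF
grep -E "Sending chunk|detected overridden config" "$log"
rm -f "$log"
```

This keeps filenames mostly out of the picture, which helps when the log can't be shared wholesale.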

thanks, yes, looks like it was just a super coincidence that every time I put the command-line parameter in, it would pick bigger files and go faster :slight_smile:

Glad we got it answered.

I always go back to:

Ice Cream Causes Polio ( Real World ) | Earth Science | CK-12 Foundation (ck12.org)

One of my favorite stories :slight_smile:

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.