What is the problem you are having with rclone?
Latency of downloading a single file from OneDrive is higher than expected: a single small file (200 bytes) takes between 4 and 10 seconds. In particular, note the 4-second gap between these two log lines:
2023/08/19 19:02:41 DEBUG : Creating backend with remote [...]
2023/08/19 19:02:45 DEBUG : fs cache: adding new entry for parent of [...]
I'm curious what is taking 4 seconds there, and whether I could be doing something better.
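For reference, a minimal way to reproduce the measurement (a sketch; how the test file is created doesn't matter, any small file shows the same behaviour):
# create a ~200 byte test file and upload it once
head -c 200 /dev/urandom > testfile
rclone copy testfile cryptdrive:
# time the single-file download that takes 4-10 seconds
time rclone copy -vvv cryptdrive:testfile .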
Run the command 'rclone version' and share the full output of the command.
rclone v1.63.1
Which cloud storage system are you using? (eg Google Drive)
OneDrive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone copy -vvv cryptdrive:testfile .
The rclone config contents with secrets removed.
[cloud]
type = onedrive
region = global
token =
drive_id =
drive_type = personal
[cryptdrive]
type = crypt
remote = cloud:/cryptroot
password =
password2 =
A log from the command with the -vv flag
2023/08/19 19:02:41 DEBUG : Setting --password-command "pass rclone/config" from environment variable RCLONE_PASSWORD_COMMAND="pass rclone/config"
2023/08/19 19:02:41 DEBUG : rclone: Version "v1.63.1" starting with parameters ["rclone" "copy" "-vvv" "cryptdrive:testfile" "."]
2023/08/19 19:02:41 DEBUG : Creating backend with remote "cryptdrive:testfile"
2023/08/19 19:02:41 DEBUG : Using config file from "/home/user/.config/rclone/rclone.conf"
2023/08/19 19:02:41 DEBUG : Creating backend with remote "cloud:/cryptroot/mvn0fap9veqajnmm3r845jac8o/ofkb81uo0f68q1pmh4p0uo34ao/baa26ddl22l4fkoc2a303m3ni8"
2023/08/19 19:02:45 DEBUG : fs cache: adding new entry for parent of "cloud:/cryptroot/mvn0fap9veqajnmm3r845jac8o/ofkb81uo0f68q1pmh4p0uo34ao/baa26ddl22l4fkoc2a303m3ni8", "cloud:cryptroot/mvn0fap9veqajnmm3r845jac8o/ofkb81uo0f68q1pmh4p0uo34ao"
2023/08/19 19:02:45 DEBUG : Creating backend with remote "."
2023/08/19 19:02:45 DEBUG : fs cache: renaming cache item "." to be canonical "/home/user"
2023/08/19 19:02:45 DEBUG : testfile: Need to transfer - File not found at Destination
2023/08/19 19:02:46 DEBUG : testfile.nivicus3.partial: renamed to: testfile
2023/08/19 19:02:46 INFO : testfile: Copied (new)
2023/08/19 19:02:46 INFO :
Transferred: 175 B / 175 B, 100%, 0 B/s, ETA -
Transferred: 1 / 1, 100%
Elapsed time: 4.6s
2023/08/19 19:02:46 DEBUG : 10 go routines active
during that time, rclone makes 7 API calls to onedrive:
2023/08/20 09:29:40 DEBUG : Creating backend with remote "onedrive:crypt/29to8cv750mh9vqcc90hkn3jko/u4arutgi5ht4nrrfgpm0qppbng"
2023/08/20 09:29:40 DEBUG : GET /v1.0/drives/d25f196fd258a305/root HTTP/1.1
2023/08/20 09:29:40 DEBUG : GET /v1.0/drives/d25f196fd258a305/items/D25F196FD258A305!107:/crypt01: HTTP/1.1
2023/08/20 09:29:40 DEBUG : GET /v1.0/drives/d25f196fd258a305/items/D25F196FD258A305!87084:/29to8cv750mh9vqcc90hkn3jko: HTTP/1.1
2023/08/20 09:29:40 DEBUG : GET /v1.0/drives/d25f196fd258a305/items/D25F196FD258A305!87100:/u4arutgi5ht4nrrfgpm0qppbng: HTTP/1.1
2023/08/20 09:29:41 DEBUG : GET /v1.0/drives/d25f196fd258a305/items/D25F196FD258A305!107:/crypt01: HTTP/1.1
2023/08/20 09:29:41 DEBUG : GET /v1.0/drives/d25f196fd258a305/items/D25F196FD258A305!87084:/29to8cv750mh9vqcc90hkn3jko: HTTP/1.1
2023/08/20 09:29:41 DEBUG : GET /v1.0/drives/d25f196fd258a305/items/D25F196FD258A305!87100:/u4arutgi5ht4nrrfgpm0qppbng: HTTP/1.1
2023/08/20 09:29:41 DEBUG : fs cache: adding new entry for parent of "onedrive:crypt/29to8cv750mh9vqcc90hkn3jko/u4arutgi5ht4nrrfgpm0qppbng", "onedrive:crypt/29to8cv750mh9vqcc90hkn3jko"
interesting, notice that rclone repeats this sequence twice:
GET /v1.0/drives/d25f196fd258a305/items/D25F196FD258A305!107:/crypt: HTTP/1.1
GET /v1.0/drives/d25f196fd258a305/items/D25F196FD258A305!87084:/29to8cv750mh9vqcc90hkn3jko: HTTP/1.1
GET /v1.0/drives/d25f196fd258a305/items/D25F196FD258A305!87100:/u4arutgi5ht4nrrfgpm0qppbng: HTTP/1.1
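a quick way to count those calls from the dump output (a sketch; the path is a placeholder for whatever file you test with):
# count the Graph API GET requests for a single dry-run copy
rclone copy onedrive:some/dir/file.ext . -vv --dump=headers --dry-run 2>&1 \
  | grep -c 'DEBUG : GET /v1.0/'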
Thanks @asdffdsa. I created my own client ID/secret and measured the latency on the bare backend (without crypt); it doesn't seem to have changed significantly. A small-file copy on the bare backend takes 1.4 to 1.8 seconds.
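For reference, the custom client ID/secret just go on the onedrive remote in rclone.conf, roughly like this (a sketch with values redacted; client_id and client_secret are the standard onedrive backend options):
[cloud]
type = onedrive
client_id = <your app's client ID>
client_secret = <your app's client secret>
region = global
token =
drive_id =
drive_type = personal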
As you pointed out, the crypt backend repeats its API calls, and I assume this is what is leading to the 4.5 seconds or so on the crypt backend. A couple questions:
How do I see the DEBUG lines you posted above? -vv doesn't show the API calls.
Any idea why the crypt backend repeats those calls? Is that something that could be reduced?
yeah, that (a custom client ID not changing the latency much) seems to be the case for most fellow rcloners, but it is a good starting point for testing.
--dump=headers
no idea, never noticed that repeat until now.
AFAIK, crypt is not the cause of the repeat calls.
run this command on the onedrive remote itself, not the crypt:
rclone copy cloud:testfile . -vv --dump=headers --dry-run
hey, now that you know about --dump, you can do more testing?
edit: hmmm, onedrive seems to repeat but S3 does not?
# onedrive
rclone copy onedrive:zork/file.ext . -vv --dump=headers --dry-run
2023/08/20 16:21:36 DEBUG : Creating backend with remote "onedrive:zork/file.ext"
2023/08/20 16:21:37 DEBUG : GET /v1.0/drives/d25f196fd258/items/d25f196fd258!107:/zork: HTTP/1.1
2023/08/20 16:21:37 DEBUG : GET /v1.0/drives/d25f196fd258/items/d25f196fd258!87101:/file.ext: HTTP/1.1
2023/08/20 16:21:37 DEBUG : GET /v1.0/drives/d25f196fd258/items/d25f196fd258!107:/zork: HTTP/1.1
2023/08/20 16:21:38 DEBUG : GET /v1.0/drives/d25f196fd258/items/d25f196fd258!87101:/file.ext: HTTP/1.1
2023/08/20 16:21:38 DEBUG : fs cache: adding new entry for parent of "onedrive:zork/file.ext", "onedrive:zork"
# S3 clone wasabi
rclone copy wasabi:zork/file.ext . -vv --dump=headers --dry-run
2023/08/20 16:28:12 DEBUG : Creating backend with remote "wasabi:zork/file.ext"
2023/08/20 16:28:12 DEBUG : HEAD /file.ext HTTP/1.1
2023/08/20 16:28:13 DEBUG : fs cache: adding new entry for parent of "wasabi:zork/file.ext", "wasabi:zork"
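to put numbers on that, you can count the request lines each backend produces for the same dry-run copy (a sketch; remotes and paths are the ones from the logs above):
# count HTTP GET/HEAD requests per backend for a one-file dry-run copy
for remote in onedrive:zork/file.ext wasabi:zork/file.ext; do
  printf '%s: ' "$remote"
  rclone copy "$remote" . -vv --dump=headers --dry-run 2>&1 \
    | grep -Ec 'DEBUG : (GET|HEAD) '
done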
I haven't dug deeper into why onedrive requires more API calls and ends up being slower. I'd be curious, but for my purposes, I've moved to Dropbox for now, and it's made a big difference. Thanks a bunch for the help with this, @asdffdsa!