Hi! Pleased to meet you! I'm the lead developer for rclone.
My experience with Zoho WorkDrive is that it is almost unusable with rclone at the moment due to rate limiting, so I can't say I'm surprised that you are hearing from users about it.
I had to disable most of the integration tests for Zoho recently because rclone was being throttled very badly.
I've been wanting a technical contact at Zoho for ages, but all my attempts to email people have been ignored!
This sounds like the pagination might be broken in rclone.
Is there any chance you could upgrade my account ( nick@craig-wood.com ) to have more API calls available? This is impossible to debug as I run out of API calls almost immediately.
(BTW, where are your docs on API limits for WorkDrive? I spent about 15 minutes searching for them yesterday and couldn't find them!)
Here is me trying to upload 50 small files to create a directory to test with.
$ rclone copy -vv 50files TestZoho:
2023/11/23 16:11:27 DEBUG : rclone: Version "v1.65.0-beta.7527.5c9cfbbc8.fix-completion" starting with parameters ["rclone" "copy" "-vv" "50files" "TestZoho:"]
2023/11/23 16:11:27 DEBUG : Creating backend with remote "50files"
2023/11/23 16:11:27 DEBUG : Using config file from "/home/ncw/.rclone.conf"
2023/11/23 16:11:27 DEBUG : fs cache: renaming cache item "50files" to be canonical "/tmp/50files"
2023/11/23 16:11:27 DEBUG : Creating backend with remote "TestZoho:"
2023/11/23 16:11:27 DEBUG : 01.txt: Need to transfer - File not found at Destination
[snip]
2023/11/23 16:11:27 DEBUG : 50.txt: Need to transfer - File not found at Destination
2023/11/23 16:11:27 DEBUG : zoho root '': Waiting for checks to finish
2023/11/23 16:11:27 DEBUG : zoho root '': Waiting for transfers to finish
2023/11/23 16:11:29 INFO : 01.txt: Copied (new)
2023/11/23 16:11:29 INFO : 03.txt: Copied (new)
2023/11/23 16:11:29 INFO : 04.txt: Copied (new)
2023/11/23 16:11:29 INFO : 02.txt: Copied (new)
2023/11/23 16:11:30 INFO : 05.txt: Copied (new)
2023/11/23 16:11:30 INFO : 08.txt: Copied (new)
2023/11/23 16:11:30 INFO : 07.txt: Copied (new)
2023/11/23 16:11:30 INFO : 06.txt: Copied (new)
2023/11/23 16:11:31 INFO : 09.txt: Copied (new)
2023/11/23 16:11:31 INFO : 10.txt: Copied (new)
2023/11/23 16:11:32 INFO : 11.txt: Copied (new)
2023/11/23 16:11:32 INFO : 12.txt: Copied (new)
2023/11/23 16:11:32 INFO : 13.txt: Copied (new)
2023/11/23 16:11:33 INFO : 14.txt: Copied (new)
2023/11/23 16:11:33 INFO : 15.txt: Copied (new)
2023/11/23 16:11:33 INFO : 16.txt: Copied (new)
2023/11/23 16:11:33 DEBUG : pacer: low level retry 1/10 (error HTTP error 429 (429 Too Many Requests) returned body: "{\"errors\":[{\"id\":\"F7008\",\"title\":\"Url throttles limit exceeded\"}]}\n")
2023/11/23 16:11:33 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023/11/23 16:11:33 DEBUG : pacer: low level retry 2/10 (error HTTP error 429 (429 Too Many Requests) returned body: "{\"errors\":[{\"id\":\"F7008\",\"title\":\"Url throttles limit exceeded\"}]}\n")
2023/11/23 16:11:33 DEBUG : pacer: Rate limited, increasing sleep to 40ms
2023/11/23 16:11:33 DEBUG : pacer: low level retry 3/10 (error HTTP error 429 (429 Too Many Requests) returned body: "{\"errors\":[{\"id\":\"F7008\",\"title\":\"Url throttles limit exceeded\"}]}\n")
2023/11/23 16:11:33 DEBUG : pacer: Rate limited, increasing sleep to 80ms
2023/11/23 16:11:33 DEBUG : pacer: Reducing sleep to 60ms
2023/11/23 16:11:33 DEBUG : pacer: low level retry 4/10 (error HTTP error 429 (429 Too Many Requests) returned body: "{\"errors\":[{\"id\":\"F7008\",\"title\":\"Url throttles limit exceeded\"}]}\n")
[snip]
So 16 files copied, then the rate limit hit, and nothing worked after that!
After waiting some time for the rate limits to reset, I managed to get the 50 files uploaded, so now I can show the behaviour of listing a directory. I'm using --tpslimit 1 here to limit rclone to one transaction per second, to try not to trip the rate limiting.
$ rclone --tpslimit 1 lsf TestZoho: -vv --dump headers 2>&1 | grep GET
2023/11/23 16:26:14 DEBUG : GET /api/v1/files/4xrzu9ece9a669c3c4d38a009ca405a67cce9/files?page%5Blimit%5D=10&page%5Boffset%5D=0 HTTP/1.1
2023/11/23 16:26:15 DEBUG : GET /api/v1/files/4xrzu9ece9a669c3c4d38a009ca405a67cce9/files?page%5Blimit%5D=10&page%5Boffset%5D=10 HTTP/1.1
2023/11/23 16:26:16 DEBUG : GET /api/v1/files/4xrzu9ece9a669c3c4d38a009ca405a67cce9/files?page%5Blimit%5D=10&page%5Boffset%5D=20 HTTP/1.1
2023/11/23 16:26:17 DEBUG : GET /api/v1/files/4xrzu9ece9a669c3c4d38a009ca405a67cce9/files?page%5Blimit%5D=10&page%5Boffset%5D=30 HTTP/1.1
2023/11/23 16:26:18 DEBUG : GET /api/v1/files/4xrzu9ece9a669c3c4d38a009ca405a67cce9/files?page%5Blimit%5D=10&page%5Boffset%5D=40 HTTP/1.1
2023/11/23 16:26:19 DEBUG : GET /api/v1/files/4xrzu9ece9a669c3c4d38a009ca405a67cce9/files?page%5Blimit%5D=10&page%5Boffset%5D=50 HTTP/1.1
2023/11/23 16:26:20 DEBUG : GET /api/v1/files/4xrzu9ece9a669c3c4d38a009ca405a67cce9/files?page%5Blimit%5D=10&page%5Boffset%5D=60 HTTP/1.1
So it appears that rclone is doing the paging properly - the offset is advancing - so if there is a bug, it isn't here.
Using pages of only 10 items seems very small though - I see the maximum in the API docs is 50 - should we switch to that?
I suspect what is happening here is that Zoho is returning a 429 error and rclone is retrying the request - can you see that in your logs?
For the Zoho backend we use these parameters for pacing and retries:
This means that we will send requests no quicker than once every 10ms, and when we have a failure we will double that to 20ms before retrying, and keep doubling until we reach 2s.
It might be that that is too aggressive for Zoho WorkDrive - perhaps it should back off straight to 1s on errors and increase exponentially to, say, 16s.
Also, one request every 10ms seems very optimistic - what would you suggest as a better value here?
It would be great to collaborate to get rclone working better with Zoho Workdrive. I'm happy to do that here, or you can email me at nick@craig-wood.com if you prefer.