Dropbox performance error

I have a performance issue with Dropbox: it sometimes freezes when I open a folder, and Emby library scans take a long time. I hope you can help. I'm using a Dropbox shared folder, but only I have access to it. Should I use the normal folder instead, where only I have access? Does it make a difference?
mount: ExecStart=/usr/bin/rclone mount xxxxxt: /xxxxx/xxxx --allow-other --dir-cache-time 9999h --log-file /media/rclone/rclone.log --log-level NOTICE --umask 002 --cache-dir=xxx --vfs-cache-mode full --vfs-cache-max-size 250G --vfs-fast-fingerprint --vfs-write-back 1h --vfs-cache-max-age 9999h --tpslimit 12 --tpslimit-burst 12

replace it with:

--tpslimit 12 --tpslimit-burst 0


That is not working any better. I now have this mount:

--allow-other --dir-cache-time 9999h --log-file /media/rclone/rclone.log --log-level NOTICE --umask 002 --cache-dir=/xxxxx/xxxx/xxxx/ --vfs-cache-mode full --vfs-cache-max-size 250G --vfs-write-back 1h --vfs-cache-max-age 24h --tpslimit 12 --tpslimit-burst 0 --disable-http2 --attr-timeout 1s --vfs-read-chunk-size 10M --dropbox-chunk-size 120M

I can't message you, @Animosity022. Maybe you can answer my question about Dropbox: should I use a team shared folder or a normal folder?

Best bet would be to use the help and support template and fill out the entire thing.

Please show the effort you've put in to solving the problem and please be specific -- people are volunteering their time to help you! Low effort posts are not likely to get good answers! DO NOT REDACT any information except passwords/keys/personal info. You should use 3 backticks to begin and end your paste to make it readable. Or use a service such as https://pastebin.com or https://gist.github.com

What is the problem you are having with rclone?

Performance is not good when I open folders in Windows/Linux.

Run the command 'rclone version' and share the full output of the command.

rclone v1.63.0

  • os/version: ubuntu 22.04 (64 bit)
  • os/kernel: 6.4.0-060400-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.5
  • go/linking: static
  • go/tags: none

Are you on the latest version of rclone? You can validate by checking the version listed here: Rclone downloads

Which cloud storage system are you using? (eg Google Drive)

dropbox

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/bin/rclone mount xxxxx: /media/drive --allow-other --dir-cache-time 9999h --log-file /media/rclone/rclone.log --log-level NOTICE --umask 002 --cache-dir=/media/data/cache/ --vfs-cache-mode full --vfs-cache-max-size 250G --vfs-write-back 1h --vfs-cache-max-age 24h --tpslimit 12 --tpslimit-burst 0 --disable-http2 --attr-timeout 1s --vfs-read-chunk-size 10M --dropbox-chunk-size 120M

The rclone config contents with secrets removed.

[wunderland]
type = dropbox
client_id = 
client_secret = 
token = 

[wunderland_crypt]
type = crypt
remote = wunderland:/wunderland
password = xxxxx
password2 = xxxxx

A log from the command with the -vv flag

2023/07/27 22:19:55 ERROR : Dropbox root 'xxxxxx': sync batch commit: failed to commit batch length 1: batch had 1 errors: last error: too_many_write_operations
2023/07/27 22:19:55 ERROR : TVShows/DE/Little Houses on the Prairie (1974)/S05/season.nfo: Failed to copy: upload failed: batch upload failed: too_many_write_operations
2023/07/27 22:19:55 ERROR : TVShows/DE/Little Houses on the Prairie (1974)/S05/season.nfo: vfs cache: failed to upload try #1, will retry in 5m0s: vfs cache: failed to transfer file from cache to remote: upload failed: batch upload failed: too_many_write_operations
2023/07/27 23:45:53 ERROR : Dropbox root 'xxxxxx': sync batch commit: failed to commit batch length 1: batch had 1 errors: last error: too_many_write_operations
2023/07/27 23:45:53 ERROR : TVShows/Synced/Luna Nera (2020)/Teailers/season.nfo: Failed to copy: upload failed: batch upload failed: too_many_write_operations
2023/07/27 23:45:53 ERROR : TVShows/Synced/Luna Nera (2020)/Teailers/season.nfo: vfs cache: failed to upload try #1, will retry in 5m0s: vfs cache: failed to transfer file from cache to remote: upload failed: batch upload failed: too_many_write_operations
2023/07/28 00:18:13 ERROR : Dropbox root 'xxxxxx': sync batch commit: failed to commit batch length 1: batch had 1 errors: last error: too_many_write_operations
2023/07/28 00:18:13 ERROR : TVShows/Synced/Outlander (2014)/S01/season.nfo: Failed to copy: upload failed: batch upload failed: too_many_write_operations
2023/07/28 00:18:13 ERROR : TVShows/Synced/Outlander (2014)/S01/season.nfo: vfs cache: failed to upload try #1, will retry in 5m0s: vfs cache: failed to transfer file from cache to remote: upload failed: batch upload failed: too_many_write_operations
2023/07/28 00:18:14 ERROR : Dropbox root 'xxxxxx': sync batch commit: failed to commit batch length 3: batch had 3 errors: last error: too_many_write_operations
2023/07/28 00:18:14 ERROR : TVShows/Synced/Outlander (2014)/S04/season.nfo: Failed to copy: upload failed: batch upload failed: too_many_write_operations
2023/07/28 00:18:14 ERROR : TVShows/Synced/Outlander (2014)/S04/season.nfo: vfs cache: failed to upload try #1, will retry in 5m0s: vfs cache: failed to transfer file from cache to remote: upload failed: batch upload failed: too_many_write_operations
2023/07/28 00:18:14 ERROR : TVShows/Synced/Outlander (2014)/S02/season.nfo: Failed to copy: upload failed: batch upload failed: too_many_write_operations
2023/07/28 00:18:14 ERROR : TVShows/Synced/Outlander (2014)/S02/season.nfo: vfs cache: failed to upload try #1, will retry in 5m0s: vfs cache: failed to transfer file from cache to remote: upload failed: batch upload failed: too_many_write_operations
2023/07/28 00:18:14 ERROR : TVShows/Synced/Outlander (2014)/S03/season.nfo: Failed to copy: upload failed: batch upload failed: too_many_write_operations
2023/07/28 00:18:14 ERROR : TVShows/Synced/Outlander (2014)/S03/season.nfo: vfs cache: failed to upload try #1, will retry in 5m0s: vfs cache: failed to transfer file from cache to remote: upload failed: batch upload failed: too_many_write_operations

Any reason for such a small chunk size? That will cause a lot more API hits.
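Each chunk is fetched with its own ranged HTTP request, so small chunks on large media files multiply the number of calls. As a sketch, a larger setting would look like this (illustrative values, not a tested recommendation):

```
--vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off
```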

Is any other application/thing using the same client ID/secret? If so, make one specifically for the mount. For a lot of writes, you may want to go down to 10 and see if that works, as there's no documentation on how they rate limit the API, so it's a bit of a guessing game.
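Concretely, that would mean lowering the limit flags on the mount, e.g.:

```
--tpslimit 10 --tpslimit-burst 0
```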

I generally work fine with 12 TPS, but every so often, I do get that if I do a lot of renames in Sonarr or something along those lines.

No reason. Which value is the best? And yes, I'm using the same client/secret for 4 mounts; I didn't know I could create multiple IDs/secrets. OK, I'm trying it. Can I just change the ID/secret in the config, or do I have to re-login? How can I get the "too_many_write_operations" error fixed? Thanks for the support.

Use a specific client ID and secret for EACH mount. You have to reconnect it for it to pick up the new ID.
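For example, assuming the remote name from the config above and placeholder credentials, you can update the remote and then re-authorize it:

```
rclone config update wunderland client_id YOUR_NEW_ID client_secret YOUR_NEW_SECRET
rclone config reconnect wunderland:
```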

No idea what you mean. I got my values from real world testing.

By following what I've said :slight_smile:

I took your config from movies and created a new app for each mount; now I'm testing. I have another issue: Emby doesn't save metadata to the drive anymore. I have set the right permissions; maybe that option (--umask 002) prevents it from saving? Do you use a team folder or the normal folder (the one only you can use)? Because on Dropbox you can make team folders or use the normal folder. Thanks for the support.

Yes, you have to set permissions in such a way that the Emby program's user has write rights. Here are more details on umask.

002 means:

| Bit | Targeted at | File permission          |
|-----|-------------|--------------------------|
| 0   | Owner       | read, write and execute  |
| 0   | Group       | read, write and execute  |
| 2   | Others      | read and execute         |
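To see the effect, here is a quick shell check (illustrative output; the owner, group, and dates are placeholders, and note that newly created files never get execute bits regardless of umask):

```
$ umask 002
$ touch test.file && mkdir test.dir
$ ls -ld test.file test.dir
-rw-rw-r-- 1 user user    0 Jul 28 00:00 test.file
drwxrwxr-x 2 user user 4096 Jul 28 00:00 test.dir
```

With 002, files are created 664 and directories 775, so Emby can write as long as it runs as the owner or as a member of the group.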

Recently I had some trouble with Dropbox as well. In my case I was impersonating my own account, which really limited the number of API calls I could make.

Every full scan would result in too_many_requests errors, even with --tpslimit at 1.

Are you suggesting impersonation is causing too_many_requests errors?

I'm banging my head against the wall here with these errors with tpslimit=1 transfers=1 and checkers=1. I am also running impersonate so maybe this is it?

I guess it is, indeed. For me, removing the impersonation helped a lot!

Maybe @ncw can shine his light on this?

I removed impersonate and am now able to go with default checkers/transfers and tpslimit=10. So it appears that impersonate basically breaks dropbox.
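In config terms, "removing impersonate" means deleting the impersonate entry from the Dropbox remote (the email below is a hypothetical example), or dropping the equivalent --dropbox-impersonate flag:

```
[wunderland]
type = dropbox
# delete this line to stop impersonating:
impersonate = you@example.com
```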

So glad you commented, I was going insane!


There's no mention in your post about using impersonate. That's why we want the exact commands, as it could have saved more time...

This isn't my post. I was searching the forums for clues to my issue since none of the usual suspects (tpslimit, pacer, client id, etc) were relieving the API throttling I was seeing. I saw Joost1991's comment regarding impersonate and chimed in.

Some things were fixed when I removed --umask 002, but it seems performance is not increasing... How did you fix it? @MrGB @Joost1991, what do you mean by "removing the impersonating"? Scans take too long...

It was nginx without HTTPS; it works great with your mount, @Animosity022. You use Caddy, right? Do you have a good tutorial for me? Because I tried and failed. Thanks for the support.

I do use Caddy. It really depends on what you want to do and what you are using.

I have my example one which might work. I think they've made some changes though recently so I may have to update it.
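As a starting point, a minimal Caddyfile for putting Emby behind Caddy might look like this (hypothetical domain; 8096 is Emby's default HTTP port):

```
emby.example.com {
    reverse_proxy 127.0.0.1:8096
}
```

Caddy obtains and renews the HTTPS certificate automatically for a public domain.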

The Caddy forums are very helpful as well.