Storj error reading destination directory: uplink: too many requests

What is the problem you are having with rclone?

I'm using rclone to sync a directory to Storj for backup. The log file is filled with errors along the lines of "error reading destination directory: uplink: too many requests". These errors have been around since I started using rclone with Storj, but lately it has become so bad that I can never finish a sync without errors, no matter how many times I retry.

I saw that there was an issue with the integration tests in the past where Storj introduced a rate limit that wasn't accounted for. Could this also be the result of a new rate limit? None of the errors relate to files; they are all about reading directories.

Run the command 'rclone version' and share the full output of the command.

root@rclone:~ # rclone version
rclone v1.65.1-DEV

  • os/version: freebsd 13.1-release-p9 (64 bit)
  • os/kernel: 13.1-release-p9 (amd64)
  • os/type: freebsd
  • os/arch: amd64
  • go/version: go1.20.12
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Storj

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync /mnt/Vault crypt_storj: -c --log-file=/root/rclone.log 

The rclone config contents with secrets removed.

root@rclone:~ # cat .config/rclone/rclone.conf
[storj]
type = storj
access_grant = *REDACTED*
satellite_address = eu1.storj.io
passphrase = *REDACTED*

[crypt_storj]
type = crypt
remote = *REDACTED*
password = *REDACTED*
password2 = *REDACTED*

A log from the command with the -vv flag

root@rclone:~ # tail -n 25 rclone.log
2024/02/03 18:20:43 ERROR : d/ZV/GUYPCIV4Q33GQOKV6WS6GM46CSGNG2: error reading destination directory: uplink: too many requests
2024/02/03 18:20:43 ERROR : d/ZN/ZBFIZIUFVY4TEEQPNQNDRRJ7BHGANE: error reading destination directory: uplink: too many requests
2024/02/03 18:20:43 ERROR : d/ZW/7WK4OYF442L6OYW5BMPLGGA6YWMJYY: error reading destination directory: uplink: too many requests
2024/02/03 18:20:43 ERROR : d/ZP/XQDFTMJMGOLRPDP24NRUTEZWDNPQSU: error reading destination directory: uplink: too many requests
2024/02/03 18:20:43 ERROR : d/ZW/A4GKSUELSX73ZO57BVCVOHNWY5NVRL: error reading destination directory: uplink: too many requests
2024/02/03 18:20:43 ERROR : d/ZQ/Z5PXORXYLIQHL3Y7RYSDHZREEU5PDU: error reading destination directory: uplink: too many requests
2024/02/03 18:20:43 ERROR : d/ZP/Y3MNCBMFEX65AW2DMXZ3MJFQD5U3P6: error reading destination directory: uplink: too many requests
2024/02/03 18:20:43 ERROR : d/ZV/HO2QGIOU5S6OLRE2HJY5742GMQSUEY: error reading destination directory: uplink: too many requests

This is an extract; there are many more files and errors, but this is representative of what happens. Some directories do get checked, but these errors keep popping up.

The native Storj backend is extremely network heavy. It needs around 110 open TCP connections for every 64 MB upload segment and sends roughly three times the file size over the wire. For bigger transfers this can mean connecting to thousands of nodes. Many networks, routers, or small computers simply cannot handle it well.

I would recommend trying the S3 gateway instead - this way of connecting to Storj is much more appropriate for home users. Create S3 credentials in the Storj web console and then create an rclone S3 remote.
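For reference, a gateway remote looks roughly like this (the remote name and placeholder credentials are just examples; use the endpoint and keys the web console gives you):

[storj_s3]
type = s3
provider = Storj
access_key_id = *REDACTED*
secret_access_key = *REDACTED*
endpoint = gateway.storjshare.io

An existing crypt remote can then point at storj_s3: instead of the native storj: remote.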

I have tried both modes and have now switched to S3 completely. Native mode was killing my network and I often got errors like yours.

Actually, I think I solved it! :smile: I turned the number of --checkers down to 1 and now no rate limiters are being hit.
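For anyone reading this later, that just means adding the flag to the same sync command, for example:

rclone sync /mnt/Vault crypt_storj: -c --checkers 1 --log-file=/root/rclone.log

With only one checker running, directory listings are issued one at a time instead of in parallel, so far fewer requests hit Storj at once.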

I actually tried that first and it worked great. The default number of segments was too high - you should set the multipart upload chunk size to 64 MB to be efficient. I switched to native mode mostly to use Storj the way it's intended to be used, and to avoid revealing the passphrase to the gateway.
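On the S3 gateway side that translates to something like the following (the flag comes from the rclone S3 backend; the remote name crypt_storj_s3 is just an example):

rclone sync /mnt/Vault crypt_storj_s3: -c --s3-chunk-size 64M --log-file=/root/rclone.log

or equivalently a chunk_size = 64M line in the S3 remote's config, so that each multipart chunk lines up with a single 64 MB Storj segment.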
